User:Jeblad/TensorFlow

To make an efficient environment for development it might be necessary to tweak and adapt the setup. That implies diverging versions of the various libraries, and even libraries that should not be installed on the computer as such.

CUDA and TensorFlow are no different in this respect.

Virtual environment on main computer
For some reason you may want to run CUDA and/or TensorFlow on the bare metal.

Environment at main computer
Installation of CUDA:


 * 1) Open "System Settings"
 * 2) Open "Software and Updates"
 * 3) Open "Additional Drivers"
 * 4) Select one of the Nvidia Drivers and click "Apply Changes"
 * 5) Reboot the system.
 * 6) Open a terminal window and type  . This will identify the graphics card.
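The driver steps above can also be done from a terminal; a minimal sketch, assuming pciutils and the ubuntu-drivers-common tool are installed:

```shell
# List PCI devices and filter for the graphics card
lspci -nn | grep -i nvidia

# Let Ubuntu suggest a matching driver for the detected hardware
ubuntu-drivers devices

# Install the recommended driver, then reboot
sudo ubuntu-drivers autoinstall
sudo reboot
```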

Download the correct Debian package from Nvidia's CUDA download page.

Virtualization of PCI hardware for Vagrant
Your PC must satisfy the following:
 * 1) Your motherboard has an IOMMU unit.
 * 2) Your CPU supports the IOMMU.
 * 3) The IOMMU is enabled in the BIOS.
 * 4) The VM must run with VT-x/AMD-V and nested paging enabled.
 * 5) Your Linux kernel was compiled with IOMMU support. The PCI stub driver is required as well.
 * 6) Your Linux kernel recognizes and uses the IOMMU unit. Search for DMAR and PCI-DMA in kernel boot log.
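The checks in the list above can be sketched as terminal commands; this assumes an Intel CPU (on AMD the marker is AMD-Vi and the boot option is amd_iommu):

```shell
# 2) The CPU must advertise hardware virtualization (vmx on Intel, svm on AMD)
grep -E -c '(vmx|svm)' /proc/cpuinfo

# 5)-6) Search the kernel boot log for IOMMU initialization
dmesg | grep -E 'DMAR|IOMMU|AMD-Vi'

# The IOMMU may need to be enabled explicitly on the kernel command line,
# e.g. intel_iommu=on in GRUB_CMDLINE_LINUX in /etc/default/grub
cat /proc/cmdline

# If the IOMMU is active, devices show up here, grouped by isolation domain
ls /sys/kernel/iommu_groups
```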

Environment at host
Now edit the Vagrantfile and add new limits for memory and CPUs.
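A minimal sketch of the relevant Vagrantfile fragment for the VirtualBox provider; the memory and CPU numbers are example values and must be adapted:

```ruby
# Vagrantfile fragment: raise the VirtualBox memory and CPU limits for the guest
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096   # MB of RAM for the guest (example value)
    vb.cpus = 2        # number of virtual CPUs (example value)
  end
end
```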

Environment at client
There are additional notes for CUDA at Ubuntu's help pages.

CUDA
Pre-install actions:

Installation Instructions:

I put the deb in my Vagrant directory on the host, and then referred to it from inside the client (by default the synced folder appears as /vagrant in the guest).
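The installation from the deb can then be sketched as follows; the package file name is hypothetical and must match what was actually downloaded:

```shell
# Register the local CUDA repository (hypothetical file name, adjust to your download)
sudo dpkg -i /vagrant/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb

# Refresh the package lists and install the CUDA toolkit
sudo apt-get update
sudo apt-get install cuda
```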

Post-install actions: Add the following to your shell profile (for example ~/.bashrc), but note the versioning; it must match the names in use!
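A sketch of the environment variables to append, assuming the toolkit landed under /usr/local/cuda-8.0; adjust the version directory to match your installation:

```shell
# Make the CUDA binaries and libraries visible (version directory is an assumption)
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
```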

Install cuDNN; note that the download is behind a membership wall. Add it to your build and link process by adding -I<include dir> to your compile line and -L<library dir> -lcudnn to your link line.
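The compile and link lines might look like this; the /usr/local/cuda prefix and the source file name are assumptions:

```shell
# Compile against the cuDNN headers and link against libcudnn (paths are assumptions)
gcc -I/usr/local/cuda/include -c my_net.c -o my_net.o
gcc my_net.o -L/usr/local/cuda/lib64 -lcudnn -o my_net
```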

TensorFlow
Play it safe and set up both Python 2.7 and 3.5?
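Setting up separate virtual environments for the two Python versions might look like this; the directory names are arbitrary, and the virtualenv tool is assumed to be installed:

```shell
# Create isolated environments for Python 2.7 and 3.5
virtualenv --python=python2.7 ~/tf-py27
virtualenv --python=python3.5 ~/tf-py35

# Activate one of them and install the GPU build of TensorFlow into it
. ~/tf-py27/bin/activate
pip install tensorflow-gpu
deactivate
```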

Manual testing
Open the interactive Python shell and run a short smoke test.
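A minimal smoke test, assuming the TensorFlow 1.x API (tf.Session); if it prints the greeting, the installation can see the library:

```python
# Minimal TensorFlow smoke test: build a constant graph and evaluate it
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
sess.close()
```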