# My perspective on TensorFlow

TensorFlow is a framework for deep learning recently released by Google. It is conceptually close to Theano, relying heavily on tensors (thus the name of the project), but it goes well beyond Theano. Together with symbolic manipulation of variables in a programmatic style, TensorFlow greatly simplifies building and optimizing complex deep architectures. It ships with several learning strategies (well beyond traditional stochastic gradient descent), transfer functions (ReLU, tanh, sigmoid) and regularization methods (L1, L2, dropout).
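To make those building blocks concrete, here is a NumPy sketch of what a couple of the transfer functions, the L1/L2 penalties and a plain SGD step actually compute; this is an illustration of the math, not TensorFlow's own API (in TensorFlow these become single ops in the graph):

```python
import numpy as np

def sigmoid(x):
    # logistic function: squashes values into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # rectified linear unit: max(0, x) elementwise (tanh is just np.tanh)
    return np.maximum(0.0, x)

def l1_penalty(w, lam):
    # L1 regularization term added to the loss: lam * sum(|w|)
    return lam * np.abs(w).sum()

def l2_penalty(w, lam):
    # L2 regularization term: lam * sum(w^2)
    return lam * (w ** 2).sum()

def sgd_step(w, grad, lr=0.1):
    # one plain stochastic-gradient-descent update
    return w - lr * grad

w = np.array([1.0, -2.0])
print(sigmoid(np.array([0.0])))      # [0.5]
print(relu(np.array([-1.0, 3.0])))   # [0. 3.]
print(l1_penalty(w, 0.5))            # 1.5
```

The point of a framework like TensorFlow is that you never hand-code these: you declare the graph and it derives the gradients and updates for you.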

Building a CNN, a stacked autoencoder or an LSTM is pretty straightforward. The only missing piece is a Scikit-learn-style pipeline, which would be handy for hyperparameter optimization. Of course it comes with GPU support (key for any serious deep learning workload).
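As a reminder of what a convolutional layer is doing under the hood, here is the core operation written out by hand in NumPy (a naive sketch for intuition; in TensorFlow the whole thing is a single, GPU-accelerated graph op rather than a Python loop):

```python
import numpy as np

def conv2d(image, kernel):
    # naive "valid" 2-D convolution (strictly, cross-correlation,
    # which is what deep-learning frameworks compute)
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with one image patch
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))
print(conv2d(img, k).shape)  # (3, 3)
```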

Before TensorFlow, Keras was my favorite framework. But TensorFlow is much more powerful, as it contains a full stack of functionality, including graph visualization (via TensorBoard) that renders astonishing interactive graphs. The debugging facilities are very cool: we can feed and retrieve the value of any tensor, on any edge of the graph. Debugging complicated computational graphs reaches a completely new level.
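The idea of inspecting any edge of the graph can be illustrated with a toy symbolic graph in plain Python; this is a hypothetical mini-implementation to show the principle, not TensorFlow's internals:

```python
# Each node remembers how to compute itself from its parents, so any
# intermediate edge can be evaluated on demand -- the idea behind
# fetching arbitrary tensors while debugging a computational graph.

class Node:
    def __init__(self, fn=None, *parents):
        self.fn, self.parents = fn, parents

    def eval(self, feed):
        # placeholders look themselves up in the feed dict
        if self in feed:
            return feed[self]
        return self.fn(*(p.eval(feed) for p in self.parents))

x = Node()                          # placeholder
y = Node()                          # placeholder
s = Node(lambda a, b: a + b, x, y)  # intermediate edge
out = Node(lambda a: a * 2, s)      # graph output

feed = {x: 3, y: 4}
print(s.eval(feed))    # 7  -- peek at the middle of the graph
print(out.eval(feed))  # 14
```

In TensorFlow the same move is simply asking the runtime to fetch an intermediate tensor alongside (or instead of) the final output.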

My only concern is on the performance side. Personally I haven't run exhaustive benchmarks (bear in mind that training a model can take hours or even days on a decently equipped machine), but I don't see much of a difference, and the time TensorFlow saves in debugging and model building more than compensates. See this post for a fuller comparison of deep learning frameworks.

Some examples are available on the project's homepage (try the classic MNIST task with a convolutional neural network). This GitHub repository also has very nice, well-annotated tutorials.

Enjoy.