TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
TensorFlow 2.7 has been released. It improves usability with clearer error messages and simplified stack traces, and adds new tools and documentation for users migrating to TF2.
The process of debugging your code is a fundamental part of the user experience of a machine learning framework. In this release, we’ve considerably improved the TensorFlow debugging experience to make it more productive and more enjoyable, via three major changes: simplified stack traces, displaying additional context information in errors that originate from custom Keras layers, and a wide-ranging audit of all error messages in Keras and TensorFlow.
By default, TensorFlow now filters the stack traces displayed upon error to hide any frame that originates from TensorFlow-internal code, keeping the information focused on what matters to you: your own code. This makes stack traces shorter and simpler, and makes it easier to understand and fix problems in your code.
If you’re actually debugging the TensorFlow codebase itself (for instance, because you’re preparing a PR for TensorFlow), you can turn off the filtering mechanism by calling tf.debugging.disable_traceback_filtering().
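The idea behind this filtering can be illustrated in plain Python with the standard traceback module. This is a simplified sketch of the concept (filter_traceback and user_code are hypothetical names), not TensorFlow's actual implementation, which decides per-frame whether a file belongs to its own codebase:

```python
import traceback

def filter_traceback(exc, internal_marker="site-packages"):
    """Format a traceback keeping only frames whose filename does not
    contain the marker, i.e. frames that look like user code."""
    frames = traceback.extract_tb(exc.__traceback__)
    user_frames = [f for f in frames if internal_marker not in f.filename]
    return "".join(traceback.format_list(user_frames))

def user_code():
    raise ValueError("boom")

try:
    user_code()
except ValueError as e:
    # Only frames outside "site-packages" survive the filter.
    print(filter_traceback(e))
```

The real mechanism works the same way in spirit: each frame is classified as framework-internal or user code, and internal frames are dropped from the displayed trace.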
One of the most common use cases for writing low-level code is creating custom Keras layers, so we wanted to make debugging your layers as easy and productive as possible. The first thing you do when debugging a layer is to print the shapes and dtypes of its inputs, as well as the value of its mask argument. We now add this information automatically to all stack traces that originate from custom Keras layers.
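The effect is similar to wrapping a layer's call() so that any exception is re-raised with the input shapes and dtypes attached. The sketch below illustrates the idea in plain Python (add_call_context, BadLayer, and FakeTensor are hypothetical names; real Keras attaches this context automatically, with no decorator needed):

```python
import functools

def add_call_context(call):
    """Re-raise exceptions from a layer's call() with input shape/dtype
    context appended, mimicking the information Keras now displays."""
    @functools.wraps(call)
    def wrapper(self, inputs, *args, **kwargs):
        try:
            return call(self, inputs, *args, **kwargs)
        except Exception as e:
            shape = getattr(inputs, "shape", None)
            dtype = getattr(inputs, "dtype", None)
            context = (
                f"\nException encountered when calling "
                f"{type(self).__name__}.call().\n"
                f"Received: inputs with shape={shape}, dtype={dtype}"
            )
            raise type(e)(str(e) + context) from e
    return wrapper

class BadLayer:
    @add_call_context
    def call(self, inputs):
        raise ValueError("Dimensions must be equal")

class FakeTensor:
    """Stand-in for a tensor: just carries shape and dtype attributes."""
    shape = (32, 10)
    dtype = "float32"

try:
    BadLayer().call(FakeTensor())
except ValueError as e:
    print(e)  # original message plus the layer and input context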
See the effect of stack trace filtering and call context information display in practice in the image below:
Lastly, we’ve audited every error message in the Keras and TensorFlow codebases (thousands of error locations!) and improved them to make sure they follow UX best practices. A good error message should tell you what the framework expected, what you did that didn’t match the framework’s expectations, and should provide tips to fix the problem.
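As a concrete illustration of those guidelines, a well-formed validation error states the expectation, the received value, and a suggested fix. The example below is a generic sketch in that style (check_rank is a hypothetical helper, not an actual Keras function):

```python
def check_rank(shape, expected_rank):
    """Validate a tensor's rank with an error message that states what was
    expected, what was received, and how to fix the mismatch."""
    if len(shape) != expected_rank:
        raise ValueError(
            f"Expected an input of rank {expected_rank}, but received an "
            f"input with shape {shape} (rank {len(shape)}). Consider "
            f"reshaping your input, e.g. with tf.reshape or tf.expand_dims, "
            f"to match the expected rank."
        )

check_rank((32, 10), 2)  # matches the expected rank, so no error is raised
```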
We have improved two common types of tf.function error messages, runtime error messages and “Graph” tensor error messages, by including tracebacks that point to the error source in the user’s code. We have also updated other vague or inaccurate tf.function error messages to be clearer and more accurate.
Consider the runtime error message caused by the following user code:
@tf.function
def f():
  l = tf.range(tf.random.uniform((), minval=1, maxval=10, dtype=tf.int32))
  return l
A summary of the old error message looks like this:
# … Python stack trace of the function call …

InvalidArgumentError: slice index 20 of dimension 0 out of bounds.
	 [[node strided_slice (defined at <ipython-input-8-250c76a76c0e>:5) ]] [Op:__inference_f_75]

Errors may have originated from an input operation.
Input Source operations connected to node strided_slice:
 range (defined at <ipython-input-8-250c76a76c0e>:4)

Function call stack:
f
A summary of the new error message looks like this:
# … Python stack trace of the function call …

InvalidArgumentError: slice index 20 of dimension 0 out of bounds.
	 [[node strided_slice (defined at <ipython-input-3-250c76a76c0e>:5) ]] [Op:__inference_f_15]

Errors may have originated from an input operation.
Input Source operations connected to node strided_slice:
In range (defined at <ipython-input-3-250c76a76c0e>:4)
In strided_slice/stack:
In strided_slice/stack_1:
In strided_slice/stack_2:

Operation defined at: (most recent call last)
# … Stack trace of the error within the function …
>>>   File "<ipython-input-3-250c76a76c0e>", line 7, in <module>
>>>     f()
>>>
>>>   File "<ipython-input-3-250c76a76c0e>", line 5, in f
>>>     return l
>>>
The main difference is that runtime errors raised while executing a tf.function now include a stack trace showing the source of the error in the user’s code:
# … Original error message and information …
# … More stack frames …
>>>   File "<ipython-input-3-250c76a76c0e>", line 7, in <module>
>>>     f()
>>>
>>>   File "<ipython-input-3-250c76a76c0e>", line 5, in f
>>>     return l
>>>
For the “Graph” tensor error messages, consider the following user code:
x = None

@tf.function
def leaky_function(a):
  global x
  x = a + 1  # Bad - leaks local tensor
  return a + 2

@tf.function
def captures_leaked_tensor(b):
  b += x
  return b

leaky_function(tf.constant(1))
captures_leaked_tensor(tf.constant(2))
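One way to avoid this class of error is to pass the intermediate tensor explicitly instead of leaking it through a Python global, so no graph tensor escapes its tf.function. The sketch below shows that fix (not_leaky_function and captures_tensor are hypothetical names for this illustration):

```python
import tensorflow as tf

@tf.function
def not_leaky_function(a):
    # Return the intermediate value instead of writing it to a global.
    x = a + 1
    return a + 2, x

@tf.function
def captures_tensor(b, x):
    # x is now an explicit argument, so no graph tensor crosses
    # function boundaries behind TensorFlow's back.
    return b + x

_, x = not_leaky_function(tf.constant(1))
print(captures_tensor(tf.constant(2), x))  # tf.Tensor(4, shape=(), dtype=int32)
```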
To support users interested in migrating their workloads from TF1 to TF2, we have created a new Migrate to TF2 tab on the TensorFlow website, which includes updated guides and completely new documentation with concrete, runnable examples in Colab.
A new shim tool has been added which dramatically eases migration of variable_scope-based models to TF2. It is expected to enable most TF1 users to run existing model architectures as-is (or with only minor adjustments) in TF2 pipelines without having to rewrite their modeling code. You can learn more about it in the model mapping guide.
Since the last TensorFlow release, the community really came together to make many new models available on TensorFlow Hub. Now you can find models like MLP-Mixer, Vision Transformers, Wav2Vec2, RoBERTa, ConvMixer, DistilBERT, YoloV5 and many more. All of these models are ready to use via TensorFlow Hub. You can learn more about publishing your models here.
At iRender, we provide a fast, powerful, and efficient solution for deep learning users, with configurations ranging from 1 to 6 RTX 3090 GPUs on both Windows and Ubuntu operating systems. Combined with 24/7 professional support, a powerful, free, and convenient data storage and transfer tool (GPUhub Sync), and affordable pricing, this makes your training process more efficient.
Register an account today to experience our service. Or contact us via WhatsApp: (+84) 912 785 500 for advice and support.
Thank you & Happy Training!
Reference source: tensorflow.com