Why Tesla Uses PyTorch for Autopilot

Tesla Motors is widely credited with kick-starting the self-driving car movement. The company is also known for pursuing reliable autonomy without LIDAR or high-resolution maps: Tesla automobiles rely entirely on computer vision.

When it comes to the intelligence behind Autopilot, Tesla is a vertically integrated organization. Raw video streams from eight cameras around the vehicle are fed into machine learning models that Tesla claims make Autopilot the best in the world. The footage from these cameras is processed through convolutional neural networks (CNNs) for object detection and, eventually, other prediction tasks.
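Tesla has not published its network architecture, but a minimal PyTorch sketch of the general idea (a shared convolutional backbone feeding several per-task heads) might look like the following. Every class name, layer size, and head here is an illustrative assumption, not Tesla's actual code.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """A toy convolutional backbone standing in for a production vision network."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class MultiHeadDetector(nn.Module):
    """Shared backbone with separate heads, e.g. one per prediction task."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = TinyBackbone()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.object_head = nn.Linear(64, num_classes)  # e.g. object categories
        self.lane_head = nn.Linear(64, 2)              # e.g. lane present / absent

    def forward(self, frames):
        feats = self.pool(self.backbone(frames)).flatten(1)
        return self.object_head(feats), self.lane_head(feats)

# One batch of frames from a single camera: (batch, channels, height, width)
frames = torch.randn(8, 3, 224, 224)
objects, lanes = MultiHeadDetector()(frames)
```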

The collected data is labelled, the networks are trained on on-premise GPU clusters, and the models are then taken through the entire stack. The networks run on Tesla's own custom hardware, giving the company full control over the lifecycle of these features, which are deployed to almost 4 million Teslas around the world. For instance, a single frame from a single camera can contain road markings, traffic lights, overhead signs, crosswalks, moving objects, static objects, and environment tags.

Tesla uses PyTorch for distributed CNN training. For Autopilot, Tesla trains around 48 networks that make 1,000 different predictions, a job that takes roughly 70,000 GPU hours. Furthermore, this training is an iterative process, and the workflows have to be automated so that none of those 1,000 predictions regresses over time.
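Tesla has not released its training code, but distributed data-parallel training in PyTorch typically follows the pattern sketched below, using torch.distributed and DistributedDataParallel. The model, dataset, batch size, and launcher setup are placeholder assumptions, not Tesla's pipeline.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def train(model, dataset, epochs=1):
    # Assumes the processes were started by a launcher such as torchrun,
    # which sets RANK, WORLD_SIZE and LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    sampler = DistributedSampler(dataset)        # shards data across GPUs
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        sampler.set_epoch(epoch)                 # reshuffle shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(local_rank), labels.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                      # gradients are all-reduced here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    # Dummy data standing in for labelled camera frames; launch with:
    #   torchrun --nproc_per_node=<num_gpus> train.py
    images = torch.randn(256, 3, 64, 64)
    labels = torch.randint(0, 10, (256,))
    net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
    train(net, TensorDataset(images, labels))
```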

When it comes to machine learning frameworks, TensorFlow and PyTorch are by far the most popular with practitioners. No other framework comes close to what these two Google and Facebook products offer, and both have steadily gained traction in the AI community. For machine learning researchers, PyTorch has become the go-to system.

PyTorch citations in arXiv papers increased 194% in the first half of 2019, and the number of contributors to the project grew by more than 50%.

PyTorch is increasingly the foundation for major machine learning research and production workloads at companies such as Microsoft and Uber, among others across industries.

With the release of PyTorch 1.3, the framework received a much-needed boost: experimental support for deploying models to mobile devices, model quantisation for better performance at inference time, and front-end improvements such as named tensors, which help developers write clearer code with fewer inline comments.
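To make those features concrete, the following is a small illustration, not an official recipe, of named tensors, post-training dynamic quantisation, and TorchScript export for mobile with PyTorch 1.3-era APIs. The model and tensor shapes are placeholders.

```python
import torch
import torch.nn as nn

# Named tensors (experimental in 1.3): dimensions carry labels instead of
# relying on positional conventions, which makes shape mistakes easier to catch.
frame = torch.randn(3, 480, 640, names=("C", "H", "W"))
print(frame.names)  # ('C', 'H', 'W')

# Post-training dynamic quantisation: weights of the listed module types are
# converted to int8, trading a little accuracy for faster, smaller inference.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# TorchScript export: a serialized module that can be loaded by the PyTorch
# Mobile runtime on a phone, without a Python interpreter.
scripted = torch.jit.script(quantized)
scripted.save("model_for_mobile.pt")
```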

The team also plans to release a range of additional tools and libraries to support model interpretability and multimodal research. In addition, the PyTorch team has partnered with Google and Salesforce to add broad support for Cloud Tensor Processing Units (TPUs), allowing substantially faster training of large-scale deep neural networks.
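Training from PyTorch on a Cloud TPU goes through the companion torch_xla package. The single-core sketch below is a rough illustration under the assumption that torch_xla is installed on a TPU host; the model, data, and hyperparameters are placeholders, and the exact API surface varies between torch_xla releases.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # resolves to the attached TPU core
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(10):
    data = torch.randn(32, 128, device=device)
    target = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(data), target)
    loss.backward()
    # Applies the update and forces the lazily-built XLA graph to execute.
    xm.optimizer_step(optimizer, barrier=True)
```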

PyTorch's timely updates line up with Elon Musk's self-imposed deadlines for his Tesla team. Tesla wants to go fully autonomous within the next few years, building on the success of Smart Summon, and we can confidently presume that it has chosen PyTorch to do the real work.
