AWS officially launched AI products to join the global artificial intelligence war

At the AWS re:Invent conference held at the end of November 2016, AWS officially launched its own AI product line, marking its formal entry into the global artificial intelligence race.

Matt Wood, General Manager of Product Strategy at AWS, said demand for machine learning has grown strongly over the past five to six years, driven mainly by three factors: algorithms, compute performance, and rich, large data sets. On algorithms: although today's algorithms have not changed much compared with those of 20 years ago, they are becoming more complex and increasingly incorporate artificial intelligence techniques. On data: the emergence of cloud computing has made rich, massive data sets available and usable. On compute: GPUs provide the highly complex, scalable computing power needed to combine those algorithms with massive amounts of data.

Matt said that the emergence of deep learning has solved several special problems in computational science, including speech recognition, image recognition, natural language understanding, personalized recommendation, and autonomous systems (self-driving cars and autonomous robots). But applying deep learning algorithms in the real world still faces three challenges: programmability, portability, and performance. Programmability refers to greatly simplifying the complexity of neural network programming. Portability refers to running neural network models on mobile phones and smart cars after the networks have been trained in the cloud. Performance refers to the speed of neural network training and inference.

Just a few weeks before the 2016 AWS re:Invent conference, AWS launched a GPU compute instance (cloud server) called P2, designed to support machine learning, high-performance computing, and other applications that require massive floating-point parallel computation. Several open source machine learning frameworks have been optimized for it, including MXNet, TensorFlow, Caffe, Theano, Torch, and CNTK. P2 provides up to 42,000 CUDA compute cores, and AWS has also pre-built a P2-based deep learning computing cluster, making it easy for average programmers to build large-scale machine learning applications.
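The "massive floating-point parallel computation" that GPUs accelerate is, at its core, the same operation applied to many data elements at once. A minimal CPU sketch of that partition-and-reduce pattern, using plain Python threads purely to illustrate the chunking (real CUDA cores run the chunks truly in parallel; names here are illustrative, not any AWS API):

```python
from concurrent.futures import ThreadPoolExecutor

def dot_chunk(pair):
    """Dot product of one chunk of two vectors."""
    xs, ys = pair
    return sum(x * y for x, y in zip(xs, ys))

def parallel_dot(xs, ys, workers=4):
    # Partition the vectors into chunks, compute partial dot products
    # "in parallel", then reduce the partial sums.
    n = len(xs)
    step = (n + workers - 1) // workers
    chunks = [(xs[i:i + step], ys[i:i + step]) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(dot_chunk, chunks))

xs = [1.0] * 1000
ys = [2.0] * 1000
result = parallel_dot(xs, ys)  # 1000 * (1.0 * 2.0) = 2000.0
```

On a GPU, the same decomposition happens across thousands of cores rather than a handful of threads, which is why workloads like neural network training map onto P2 instances so well.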


Matt Wood, General Manager, AWS Product Strategy

Matt said that since launch, the P2 product has received a great deal of attention from programmers. A Chinese startup called Tucson is using P2 to develop algorithms for smart cars. Using in-vehicle image recognition devices and deep learning algorithms, Tucson's software can recognize the surroundings of a moving car both day and night, and can locate vehicles in 3D space from flat images with centimeter-level accuracy.

Among the open source deep learning frameworks, AWS chose MXNet as its official machine learning platform of choice. Matt said MXNet was chosen for its programmability, portability, and performance. First, MXNet supports a wide range of languages, including Go, Python, Scala, R, JavaScript, and C++. This is because MXNet is actually composed of two layers, a front end and a back end: the front end supports multiple languages, while the back end uses a unified system and generic operators.
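The front-end/back-end split described above can be sketched in a few lines: different front ends (imagine Python and Scala) lower user code to the same generic operators, which a single back end executes. This is a hypothetical illustration of the architecture, not MXNet's real API:

```python
# One shared back end: a table of generic operators.
BACKEND_OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def run_backend(program, inputs):
    """Execute a list of (op, lhs, rhs, out) instructions on the back end."""
    env = dict(inputs)
    for op, lhs, rhs, out in program:
        env[out] = BACKEND_OPS[op](env[lhs], env[rhs])
    return env

# Two "front ends" emitting the same intermediate representation
# for the expression z = x * y + x:
front_end_a = [("mul", "x", "y", "t"), ("add", "t", "x", "z")]
front_end_b = [("mul", "x", "y", "t"), ("add", "t", "x", "z")]

result = run_backend(front_end_a, {"x": 2, "y": 3})["z"]  # 2*3 + 2 = 8
```

Because every front-end language targets the same operator set, optimizations in the back end benefit all supported languages at once.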

Second, in its choice of programming model, MXNet supports both imperative and declarative programming. Imperative programming, as in languages such as Python or Groovy, is flexible and can use many of the native features the language itself provides, but it is difficult to optimize for large-scale computation. Declarative programming, as in languages such as SQL, is easier to optimize and can support more languages, but it lacks flexibility; frameworks such as Caffe and Theano fall into this category. MXNet gives programmers both modes at the same time, greatly improving the efficiency and flexibility of deep learning programming.
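The contrast between the two modes can be shown in a few lines of plain Python (an illustrative sketch, not MXNet's actual API): imperative code computes each value eagerly, while declarative code first builds a symbolic expression graph that a framework could optimize before evaluating it.

```python
# Imperative: each line executes immediately.
a = 2
b = 3
c = a * b + a          # evaluated right now -> 8

# Declarative: build a symbolic graph first, evaluate later in one pass.
class Sym:
    def __init__(self, fn):
        self.fn = fn                      # deferred computation
    def __mul__(self, other):
        return Sym(lambda env: self.fn(env) * other.fn(env))
    def __add__(self, other):
        return Sym(lambda env: self.fn(env) + other.fn(env))

def var(name):
    return Sym(lambda env: env[name])

x = var("x")
graph = x * var("y") + x                  # nothing computed yet
result = graph.fn({"x": 2, "y": 3})       # evaluated only here -> 8
```

The deferred graph is what makes declarative mode easy to optimize: the whole computation is visible before anything runs, so a framework can fuse operators or schedule them across devices.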

Third, in terms of portability, MXNet requires only a small amount of memory to run: a neural network thousands of layers deep requires only 4 GB. A model trained in the cloud can then be deployed to smartphones, IoT devices, drones, and even JavaScript-based browsers, so the portability of MXNet programs is very good.
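The train-in-the-cloud, deploy-on-the-device flow boils down to serializing trained parameters into a compact file that a small runtime on the device can load. A toy sketch of that round trip (illustrative only; MXNet uses its own binary checkpoint format, not JSON):

```python
import json
import os
import tempfile

# "Trained" parameters of a toy linear model y = w*x + b,
# standing in for a real network's weights.
params = {"w": 0.5, "b": 1.0}

path = os.path.join(tempfile.gettempdir(), "model.json")
with open(path, "w") as f:
    json.dump(params, f)                  # export from the "cloud"

with open(path) as f:
    loaded = json.load(f)                 # load on the "device"

def predict(x):
    return loaded["w"] * x + loaded["b"]

y = predict(4.0)  # 0.5 * 4 + 1 = 3.0
```

The key point is that inference needs only the weights and a lightweight runtime, which is why a model trained on a GPU cluster can run on a phone or drone.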

Fourth, MXNet also delivers strong performance for parallel high-performance computing. It can easily adapt serial algorithms to a parallel computing mode, run them across multiple CUDA cores on a single GPU, and scale out to multiple GPUs.
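One common way frameworks scale training across GPUs is data parallelism: the batch is split into shards, each device computes gradients on its shard, and the partial results are combined. A stdlib-only sketch of the idea, with plain Python functions standing in for the GPUs (names and the toy model are illustrative, not MXNet's API):

```python
def grad_on_device(shard, w):
    """Gradient of mean squared error for the toy model y = w*x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_grad(batch, w, devices=2):
    # Split the batch into one shard per device.
    step = (len(batch) + devices - 1) // devices
    shards = [batch[i:i + step] for i in range(0, len(batch), step)]
    # Each shard's gradient would be computed on a separate GPU; the
    # partial gradients are combined, weighted by shard size.
    total = sum(grad_on_device(s, w) * len(s) for s in shards)
    return total / len(batch)

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
g_parallel = data_parallel_grad(batch, w=1.0)
g_serial = grad_on_device(batch, w=1.0)
assert abs(g_parallel - g_serial) < 1e-9  # same result, split across "devices"
```

The serial and parallel paths produce identical gradients; the payoff on real hardware is that the shards are processed concurrently.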

Building on MXNet, AWS officially launched its own AI product line at this re:Invent conference. The first products are Amazon Polly, a text-to-speech service supporting 47 voices in 24 languages; Amazon Rekognition, a deep-learning-based image and face recognition service; and Amazon Lex, for building natural, human-like conversational interfaces. Amazon Lex uses the same automatic speech recognition and natural language understanding technology as Amazon Alexa. Integrated with AWS Lambda, developers can easily use Lex to build a variety of chat bots and service bots for web applications and instant messaging tools, or to add natural-language interaction to mobile phones and IoT devices.

As of November 30, 2016, Amazon Polly and Amazon Rekognition are available in the US East, US West, and European AWS regions, with other regions to follow; Amazon Lex is available in preview, and users can apply for access now.
