Amazon is a company you’d expect to be at the forefront of commercializing AI—it has the computing power of AWS and the expertise behind Alexa. Its opening move in this game is CodeWhisperer, a coding assistant that is now free, with the promise of more to come.
Surprisingly, this undercuts Microsoft’s offer: Copilot costs $10 a month, while CodeWhisperer is free. CodeWhisperer isn’t out of the blue—it entered preview in June 2022 (see Amazon’s CodeWhisperer preview announcement). Now it’s generally available, free for anyone who wants to try it out, not just AWS users.
It works with Python, Java, JavaScript, TypeScript, and C#, as well as ten more languages, including Go, Kotlin, Rust, PHP, and SQL. CodeWhisperer can be accessed from IDEs such as VS Code and IntelliJ IDEA through the AWS Toolkit extension, and from AWS Cloud9. Note that the latter is an AWS-specific service, so it may only be of interest to programmers already working on AWS.
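To give a feel for how such an assistant is used, here is a hypothetical Python example: the developer types the comment and function signature, and the assistant proposes the body. The suggestion shown is illustrative, not an actual CodeWhisperer capture.

```python
import boto3

# Developer types the comment and signature below; the assistant
# suggests the implementation.

# upload a file to an S3 bucket
def upload_file_to_s3(file_path: str, bucket: str, key: str) -> None:
    # --- suggested completion ---
    s3 = boto3.client("s3")
    s3.upload_file(file_path, bucket, key)
```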
After a few days of use, the overall impression is that it is not quite as good as Microsoft’s Copilot. The difference is probably in the training: Copilot is based on GPT-3.5 and was therefore trained on different data, while CodeWhisperer uses a custom model trained on billions of lines of code from open source repositories, Amazon internal repositories, API documentation, and forums. Unfortunately, there’s no detailed documentation on how it was built or trained, so we can only speculate.
Will the presence of CodeWhisperer force Microsoft to lower the price of Copilot? Comparing like with like, Copilot for Business and CodeWhisperer Professional both cost $20/user/month, so the price pressure on Microsoft is less than it looks. Also, at $10 per month, the individual tier of Copilot is cheap for a tool that works well, even compared to the zero cost of a tool that works less well.
Around the same time CodeWhisperer was announced as generally available, Amazon announced Bedrock, using the term Foundation Model (FM) rather than LLM (Large Language Model). Bedrock is not a single model, but a collection:
“Bedrock customers can choose from some of today’s most cutting-edge FMs, including the Jurassic-2 family of multilingual LLMs from AI21 Labs, which follow natural-language instructions to generate text in Spanish, French, German, Portuguese, Italian, and Dutch; Claude, Anthropic’s LLM, which can handle a wide range of conversational and text-processing tasks and is based on Anthropic’s extensive research into training honest and responsible AI systems; and Stability AI’s text-to-image models, which can generate realistic, high-quality images, artwork, logos, and designs.”
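Bedrock is exposed through the AWS SDK. At announcement time it was in limited preview, so the following boto3 sketch is an assumption of what invoking one of the hosted models looks like; the model ID and request-body schema are provider-specific, and the values here are illustrative.

```python
import json
import boto3

# The bedrock-runtime client serves inference requests.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID and body format are illustrative; each provider defines its own.
response = client.invoke_model(
    modelId="ai21.j2-mid-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"prompt": "Translate to French: Hello, world.",
                     "maxTokens": 100}),
)
print(json.loads(response["body"].read()))
```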
According to Amazon, this could be the killer feature:
“Customers simply point Bedrock at a few labeled examples in Amazon S3, and the service can fine-tune the model for a particular task without having to annotate large volumes of data (as few as 20 examples is enough).”
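Since Bedrock’s customization API wasn’t publicly documented at the time, here is a hedged boto3 sketch of what “pointing Bedrock at labeled samples in S3” could look like; the job name, role ARN, model ID, S3 URIs, and hyperparameters are all placeholders.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Kick off a customization job: the base model is fine-tuned on the
# labeled examples stored in S3. All identifiers below are placeholders.
bedrock.create_model_customization_job(
    jobName="support-ticket-classifier",
    customModelName="my-tuned-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-bucket/labeled-samples.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/custom-model-output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1"},
)
```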
Amazon also offers dedicated AWS instances to run AI software:
“Today we announced the general availability of Inf2 instances, powered by AWS Inferentia2, which are optimized specifically for large-scale generative AI applications with models containing hundreds of billions of parameters. Inf2 instances deliver up to 4x higher throughput and up to 10x lower latency than the previous generation Inferentia-based instances. They also feature ultra-high-speed connectivity between accelerators to support large-scale distributed inference. These capabilities deliver up to 40% better inference price performance than other comparable Amazon EC2 instances and the lowest inference costs in the cloud.”
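Provisioning one of these is an ordinary EC2 launch. A minimal boto3 sketch might look like the following; the AMI ID and key pair name are placeholders, and in practice you would typically pick a Deep Learning AMI with the Neuron SDK preinstalled.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single inf2.xlarge instance (AMI ID and key name are placeholders).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="inf2.xlarge",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```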
This could be a cash cow for Amazon, judging by a recent quote from Amazon CEO Andy Jassy:
“A lot of companies want to use these big language models, but really good language models require billions of dollars and years of training, and most companies don’t want to do that.”
Admittedly, the world of AI has changed a lot since foundation models arrived. Previously, companies or researchers had to decide on the required architecture—how many layers, which activation function, how many neurons, and so on—then collect data and train the model. Today’s AI efforts are increasingly focused on reusing and refining trained models—a big shift, because it puts the companies with billions of dollars to spend in control of training these models, while the rest of us build on what they provide.
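To make the shift concrete, here is a minimal sketch of the modern workflow using the Hugging Face Transformers library: rather than designing and training a network from scratch, you download a pretrained model and fine-tune it on a small labeled dataset. The model name and dataset are illustrative choices, not anything Amazon specifies.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Reuse a model someone else spent the compute to pretrain...
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# ...and refine it on a small labeled dataset (IMDB sentiment, as an example).
dataset = load_dataset("imdb")
encoded = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    # A small slice keeps this sketch quick to run.
    train_dataset=encoded["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```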