The week in AI: OpenAI attracts deep-pocketed rivals in Anthropic and Musk

by Ana Lopez

Keeping up with an industry that evolves as fast as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the past week's stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

The biggest news of the past week (we're politely disregarding our Anthropic story) was the announcement of Bedrock, Amazon's service that provides a way to build generative AI apps using pre-trained models from startups including AI21 Labs, Anthropic and Stability AI. Currently available in "limited preview," Bedrock also offers access to Titan FMs (foundation models), a family of AI models trained in-house by Amazon.

It makes perfect sense that Amazon would want a horse in the generative AI race. After all, the market for AI systems that create text, audio, speech and more could be worth more than $100 billion by 2030, according to Grand View Research.

But Amazon has a motive that goes beyond capturing a slice of a growing new market.

In a recent Motley Fool piece, Timothy Green presented compelling evidence that Amazon's cloud business could be slowing. The company reported 27% year-over-year revenue growth for its cloud services in the third quarter of 2022, but growth slowed to around 20% by the end of the quarter. Meanwhile, the operating margin for Amazon's cloud division fell 4 percentage points year-over-year in the same quarter, suggesting the company had expanded too quickly.

Amazon clearly has high hopes for Bedrock, going so far as to train the aforementioned in-house models ahead of launch, which probably wasn't an insignificant investment. And lest anyone question the company's seriousness about generative AI, Amazon hasn't put all its eggs in one basket. This week it made CodeWhisperer, its system that generates code from text prompts, free for individual developers.

So, will Amazon capture a meaningful slice of the generative AI space, reviving its cloud business in the process? It’s a lot to hope for, especially given the technology’s inherent risks. Time will tell, eventually, as the dust settles in generative AI and competitors big and small emerge.

Here are the other AI headlines from the past few days:

  • The wide, wide world of AI regulation: Everyone seems to have their own ideas about how to regulate AI, and that means something like 20 different frameworks across the major countries and economic zones. Natasha dives deep into the nitty-gritty with this exhaustive (for the moment) list of regulatory frameworks (including outright bans, like Italy's on ChatGPT) and their potential effects on the AI industry in each region. China, meanwhile, is doing its own thing.
  • Musk takes on OpenAI: Not content with dismantling Twitter, Elon Musk is reportedly planning to take on his former ally OpenAI, and is currently trying to raise the money and people needed to do so. The busy billionaire could draw on the resources of his various companies to speed up the work, but there's good reason to be skeptical about the endeavor, Devin writes.
  • The elephant in the room: AI research startup Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter more than a dozen major industries, according to company documents obtained by businessupdates.org. In the documents, Anthropic says it plans to build a "frontier model" (tentatively dubbed "Claude-Next") that would be 10 times more capable than today's most powerful AI, and that it expects this to require $1 billion in spending over the next 18 months.
  • Build your own chatbot: An app called Poe now lets users create their own chatbots using prompts combined with an existing bot, such as OpenAI's ChatGPT, as a base. First publicly launched in February, Poe is the latest product from the Q&A site Quora, which has long provided web searchers with answers to the most frequently googled questions.
  • Beyond Diffusion: While the diffusion models used by popular tools like Midjourney and Stable Diffusion may seem like the best we've got, the next thing is always coming, and OpenAI may have hit on it with "consistency models," which can already perform simple tasks an order of magnitude faster than DALL-E, Devin reports.
  • A small city with AI: What would happen if you filled a virtual town with AIs and set them loose? Researchers at Stanford and Google tried to find out in a recent experiment using ChatGPT. Their attempt to create a "credible simulation of human behavior" appears to have been a success: the 25 ChatGPT-powered agents were convincingly, surprisingly human in their interactions.
Image Credits: Google / Stanford University ("Interactive Simulacra of Human Behavior")

  • Generative AI in the Enterprise: In a piece for TC+, Ron writes about how transformative technologies like ChatGPT could be when applied to the business applications people use every day. He notes, however, that getting there will take creativity to design the new AI-powered interfaces elegantly, so they don't feel bolted on.


More machine learning

Image Credits: Meta

Meta open-sourced a popular experiment that lets people animate drawings of people, no matter how crude they are. It's one of those unexpected applications of the technology that's both delightful and totally trivial. Still, people liked it so much that Meta has released the code so anyone can build it into something of their own.

Another Meta experiment, called Segment Anything, made a surprisingly big splash of its own. LLMs are so popular right now that it's easy to forget about computer vision, and even then about a specific part of the pipeline that most people don't think about. But segmentation (identifying and outlining objects) is an incredibly important piece of any robotics application, and as AI continues to infiltrate "the real world," it's more important than ever that it can segment... well, anything.

Image Credits: Meta

Professor Stuart Russell has taken the businessupdates.org stage before, but our half-hour conversations only scratch the surface of the topic. Fortunately, he routinely gives lectures, talks, and classes on the subject, which are thoroughly grounded and interesting thanks to his long familiarity with the field, even when they carry provocative titles like "How not to let AI destroy the world."

Check out this recent presentation introduced by another TC friend, Ken Goldberg:

