Weekly analysis at the intersection of artificial intelligence and industry.



July 30, 2024

Hello and welcome to Eye on AI!


Canva, the Australian graphic design platform with over 190 million monthly users, announced Monday that it plans to acquire Leonardo.AI, a popular Australian image generation startup that has raised nearly $39 million in funding, and would integrate its models into Canva’s generative AI tools.


Upon hearing the news, I immediately recalled that the startup had drawn some unwanted attention back in March, after 404 Media reported on its lack of guardrails against users generating nonconsensual deepfake porn and compared it to the similarly criticized Civitai, another image generation community, which is backed by Andreessen Horowitz.


I asked Canva cofounder Cameron Adams about that in an interview yesterday. He responded by saying that Leonardo has “definitely done a ton of work to tighten up their systems and has a stronger focus on trust and safety.” 


I’m sure he’s right about that—in fact, Reddit threads are filled with Leonardo users complaining that the filters are now too restrictive. “Your Content Filters Are Ridiculous!” wrote one Reddit user last week, bemoaning the fact that seemingly innocuous phrases like “Black speckled markings on his lips and nose” were blocked. 


Balancing creativity and safety is an issue that is always evolving, said Adams. “You need to keep up with all the types of content that people are going to try and create,” he explained. “You need to be constantly monitoring them, adjusting them, and making sure that they meet your values.” 


But is Leonardo’s evolution, which also includes its own recently released AI model, called Phoenix, enough to erase its unfiltered past? And can it avoid future controversies, such as copyright lawsuits over how its models were trained? To be clear, those questions aren’t just for Leonardo. They also apply to other AI startups rumored to be seeking buyers, such as Character AI, founded by a prominent former Google researcher, and Stability AI, whose Stable Diffusion model Leonardo used to launch its platform and which has been challenged in several copyright-focused lawsuits.


Still, it’s interesting to see a company like Leonardo pivot towards an acquisition by the increasingly B2B-focused Canva, which offers brands the opportunity to create assets for marketing and advertising campaigns. Canva, which was founded in 2013, was also early to the generative AI game, launching its AI-generated Magic Write tool in December 2022, just weeks after OpenAI’s ChatGPT launched. 


In many ways, though, the deal is a good fit for the two Australian companies: Leonardo, which boasts a community of over 19 million users, also targets creative professionals and teams looking to create graphic design, concept art, and marketing and fashion imagery. (Canva said it would continue to offer Leonardo as a standalone tool.) One of Leonardo’s key features is the ability to train small models on its platform using specific data, such as a set of photos, so that generated images feature the same character.


Adams mostly waved away concerns about Leonardo’s past controversies. The startup’s recently released Phoenix model is trained on “publicly available” data and open data from Creative Commons, he said, though there is no way to verify that this excludes copyrighted material scraped from the web.


In any case, Canva’s enterprise business clients don’t have to worry about those issues, because Canva has long offered indemnification for those customers. Meanwhile, Canva’s strict Terms of Service place liability for any image output issues on its customers. And in general, as more large companies integrate AI models into their platforms and tools, those integrations abstract issues such as copyright further and further away from the original training data, something only future legal rulings and regulations will be able to tackle.


There is no doubt, though, that text and image models have proved fairly easy to hack with the right prompts, and keeping them fully protected will remain difficult. One popular hacker who goes by Pliny the Prompter on X, for example, has posted about being able to quickly generate deepfakes and NSFW (not safe for work) content on image generators like Midjourney and Stability AI’s tools. If Phoenix is anything like the others, he told Fortune, it would not be hard to get it to output deepfake and porn images, though he admitted that image models can be more difficult to “jailbreak” than text models. “There’s a whole lot of randomness, so it can take a bit of luck and a lot of retries,” he said.


But he did not buy Leonardo’s claim that its robust filters could keep hackers from getting around guardrails. “If I had a nickel for every time I’ve heard that,” he said. 


With that, here’s more AI news.


Sharon Goldman
[email protected]
@sharongoldman



AI IN THE NEWS


Meta will roll out a tool to let users create, share, and design personalized AI chatbots. According to Reuters, Meta will roll out AI Studio, a tool that will let users create customized AI characters and let Instagram creators use those characters “as an extension of themselves” to handle common DM questions and story replies. Users can share the AI characters on Facebook, Instagram, and WhatsApp. The new tool is built with Llama 3.1, the largest version of Meta’s family of open-source AI models, which was released last week.


Shutterstock offers companies an AI tool to create generative 3D models. Shutterstock, moving even further from its stock photo origins, announced the launch of its 3D model AI generator, built on NVIDIA’s Edify multimodal architecture. It was trained exclusively on Shutterstock content, including more than half a million ethically sourced 3D models, more than 650 million images, and detailed Shutterstock metadata. “With our new generative 3D capabilities, studios and developers can revolutionize their pipelines, leveraging the only generative 3D service entirely trained on licensed data to ensure fair compensation for the original creators who also have the option to opt-out,” said Dade Orgeron, vice president of innovation at Shutterstock, in a press release.


At least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, according to new research from Gartner. This result, which Gartner attributes to “poor data quality, inadequate risk controls, escalating costs or unclear business value,” speaks to the many challenges facing organizations trying to justify the substantial investment in gen AI, which Gartner says can range from $5 million to $20 million, to enhance productivity and create new business opportunities. At the Gartner Data & Analytics Summit in Sydney this week, Gartner analyst Rita Sallam said that after the gen AI hype of the past year, executives are “impatient to see returns on GenAI investments, yet organizations are struggling to prove and realize value.” And while those companies are widening the scope of their investments, “the financial burden of developing and deploying GenAI models is increasingly felt.”


Elon Musk shared a video ad featuring a Kamala Harris voice clone. An AI-generated voice clone of Vice President Kamala Harris, shared on X by Elon Musk in violation of the site’s policy, raised concerns about AI disinformation as the U.S. election looms, the Associated Press reported. The AI-generated voice was featured in a fake ad that used many of the same visuals as a real ad that Harris released as part of her presidential campaign launch. In the video, the AI voice says: “I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” and claims Harris is a “diversity hire” because she is a woman and a person of color.


EYE ON AI RESEARCH


‘Compound’ AI systems are gaining ground in tackling LLM accuracy and reliability. For much of the past year, a technique called RAG (retrieval-augmented generation) has been all the rage. By combining a model with a data retrieval system, RAG can give an existing AI model new information it was never trained on in order to perform a specific task. Since then, the idea of “compound” AI systems has gained steam, including a Berkeley AI Research blog post that noted the trend, saying that “state-of-the-art AI results are increasingly obtained by compound systems with multiple components, not just monolithic models.”
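The retrieve-then-generate loop at the heart of RAG can be illustrated with a toy sketch. Everything below is an illustrative stand-in of my own: the document store is hand-written, the retriever ranks by simple keyword overlap, and the "generator" just stuffs retrieved text into a prompt string; production systems use vector embeddings and a real LLM.

```python
import re

# Toy document store; a real RAG system would index these with embeddings.
DOCUMENTS = [
    "Canva announced plans to acquire Leonardo.AI in July 2024.",
    "Leonardo.AI released its own image model, called Phoenix.",
    "RAG pairs a retriever with a generator model.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query (toy retriever)."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: stuffs retrieved context into the prompt."""
    return f"Answer '{query}' using: {' '.join(context)}"

query = "What model did Leonardo release?"
prompt = generate(query, retrieve(query, DOCUMENTS))
```

The point of the pattern is visible even at this scale: the "model" never has to be retrained, because fresh facts arrive through the retrieval step at query time.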


Now a group of researchers, including two founders of Databricks, has published an interesting paper showing that a system they call Networks of Networks (NoNs) surpassed the performance of GPT-4 “by a wide margin.” “Best part is, we didn’t have to train anything new...and all the research only cost a few tens of thousands,” said Jared Quincy Davis, one of the paper’s authors. “In some ways, this compound systems stuff is giving the AI research community GPT-5 or 6 equivalent capabilities.”
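The general idea behind such compound systems, sampling many model calls and aggregating their answers, can be sketched in a few lines. To be clear, this is my own rough illustration of answer aggregation by majority vote, not the paper's actual NoN architecture, and the `sample_models` stub is hypothetical; a real system would make repeated calls to one or more LLM APIs.

```python
from collections import Counter

def sample_models(question: str) -> list[str]:
    """Hypothetical stand-in for repeated calls to one or more LLMs."""
    # Simulated noisy answers; a real system would query model APIs here.
    return ["42", "42", "41", "42", "40"]

def network_of_networks(question: str) -> str:
    """Aggregate many sampled answers and return the majority vote."""
    votes = Counter(sample_models(question))
    return votes.most_common(1)[0][0]

answer = network_of_networks("What is 6 * 7?")
```

Even this naive aggregation illustrates Davis's point about cost: the compound system only orchestrates existing models, so no new training is required.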


FORTUNE ON AI


Singapore’s AI ambitions: How the city-state is keeping up in an arms race dominated by the U.S. and China —by Clay Chandler


Can Microsoft keep it up? Investors focus on cloud and eye AI revenues ahead of Q2 earnings —by Greg McKenna


Sam Altman issues call to arms to ensure ‘democratic AI’ will defeat ‘authoritarian AI’ —by Jason Ma


An AI-dominated future with no jobs could be a dream or a nightmare —by Geoff Colvin


Startup with ‘radical’ concept for AI chips emerges from stealth with $15 million to try to challenge Nvidia —by Jeremy Kahn


OpenAI’s new SearchGPT takes aim at Google in the battle for AI search dominance. But will it win the war? —by Sharon Goldman




AI CALENDAR


Aug. 12-14: Ai4 2024 in Las Vegas


Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia


Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)

