May 2024

In this month's newsletter we cover Amazon's open-source models for time series forecasting, a new knowledge graph framework that uses LLMs to discern commonsense relationships, research collaborations with UW and Columbia, new features for Amazon Bedrock, and more. Be sure to check out our latest career opportunities, which include internships and programs for academics.

Deep dives

  • Adapting language model architectures for time series forecasting: Amazon researchers released the training code for Chronos, a family of pretrained, open-source models for time series forecasting, which are built on a language model architecture and trained with billions of tokenized time series observations to provide accurate zero-shot forecasts for new time series.
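The core trick of treating forecasting as a language modeling problem is converting real-valued observations into a discrete vocabulary. A minimal sketch of that idea: mean-scale the series, then quantize each scaled value into one of a fixed number of uniform bins whose IDs serve as tokens. The bin count, value range, and function name below are illustrative assumptions, not the released Chronos implementation.

```python
def tokenize_series(values, num_bins=4096, low=-15.0, high=15.0):
    """Mean-scale a series, then quantize each value into a discrete token ID."""
    # Scale by the mean absolute value so series of different magnitudes
    # share one vocabulary; guard against an all-zero series.
    scale = sum(abs(v) for v in values) / len(values) or 1.0
    scaled = [v / scale for v in values]

    # Uniform binning over [low, high); out-of-range values are clipped.
    width = (high - low) / num_bins
    tokens = []
    for v in scaled:
        clipped = min(max(v, low), high - 1e-9)
        tokens.append(int((clipped - low) / width))
    return tokens, scale
```

A language model trained over such token sequences can then "predict the next observation" exactly as it would predict the next word; the saved scale lets forecasts be mapped back to the original units.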

  • Evaluating the helpfulness of AI-enhanced catalogue data: The Amazon Catalog team uses generative AI to make product information more useful and A/B testing to evaluate enriched data. Two team members explain how causal random forests and Bayesian structural time series help them extrapolate from sparse A/B data.
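The team's actual tools are causal random forests and Bayesian structural time series; as a far simpler stdlib illustration of the underlying counterfactual idea, the sketch below fits a trend to pre-launch data, projects it forward, and averages the gap between observed post-launch values and that projection. The linear model and function names are hypothetical simplifications, not the team's method.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def estimated_lift(pre_period, post_period):
    """Average gap between observed post-period values and the
    counterfactual projected from the pre-period trend."""
    xs = list(range(len(pre_period)))
    a, b = linear_fit(xs, pre_period)
    gaps = [y - (a + b * i)
            for i, y in enumerate(post_period, start=len(pre_period))]
    return sum(gaps) / len(gaps)
```

For example, `estimated_lift([1.0, 2.0, 3.0, 4.0], [6.5, 7.5])` projects the unit trend forward to 5 and 6, so the estimated lift is 1.5. Bayesian structural time series generalize this projection with seasonality, regression components, and uncertainty intervals.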

  • Building commonsense knowledge graphs to aid product recommendation: To improve Amazon’s recommendation engine, Amazon researchers are building a knowledge graph that encodes commonsense relationships between products and queries. At SIGMOD/PODS 2024, they'll present COSMO, a framework that uses LLMs to extract those relationships.

The COSMO framework.

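A commonsense knowledge graph of this kind can be represented as (head, relation, tail) triples linking queries to products. The sketch below shows one way LLM output might be parsed into such triples, assuming a hypothetical "head | relation | tail" line format; COSMO's actual pipeline and output format are not described in the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    head: str      # e.g., a customer query
    relation: str  # a commonsense relation such as "usedFor" or "capableOf"
    tail: str      # e.g., a product or product category

def parse_llm_relations(raw: str):
    """Parse newline-separated 'head | relation | tail' lines into triples,
    silently skipping malformed lines."""
    triples = []
    for line in raw.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append(Triple(*parts))
    return triples
```

For instance, a line like `shoes for pregnant women | usedFor | slip-on shoes` becomes a graph edge a recommendation engine can traverse to connect an intent-level query with relevant products.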
  • More reliable nearest-neighbor search with deep metric learning: Deep metric learning is a powerful tool, but it yields inconsistent distances between data embeddings, which can hamper nearest-neighbor search. At this year's ICLR, Amazon researchers showed how to make distances more consistent, improving model performance.
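One common way to make embedding distances behave more consistently in nearest-neighbor search (a standard trick, not necessarily the ICLR paper's technique) is to L2-normalize embeddings so that Euclidean and cosine rankings coincide. A minimal sketch:

```python
import math

def normalize(vec):
    """Scale a vector to unit L2 norm (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / n for x in vec]

def nearest(query, embeddings):
    """Index of the nearest neighbor by Euclidean distance on the unit sphere."""
    q = normalize(query)
    best, best_d = -1, float("inf")
    for i, e in enumerate(embeddings):
        v = normalize(e)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, v)))
        if d < best_d:
            best, best_d = i, d
    return best
```

On the unit sphere, squared Euclidean distance equals 2 minus twice the cosine similarity, so the two metrics induce the same neighbor ordering and raw embedding magnitudes no longer distort comparisons.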

  • Generalizing diffusion modeling to multimodal, multitask settings: At this year's ICLR, Amazon scientists showed how to generalize diffusion models to multimodal, multitask settings. The keys are a loss term that induces the model to recover modality in the reverse process and a way to aggregate inputs of different modalities.
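For background, the standard single-modality forward diffusion process can be sampled in closed form as x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise. The sketch below implements that with an assumed linear beta schedule; the paper's multimodal, multitask generalization (the modality-recovery loss and input aggregation) is not shown here.

```python
import math
import random

def alpha_bar(t, T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_s) for s = 1..t under a linear schedule."""
    prod = 1.0
    for s in range(1, t + 1):
        beta = beta_start + (beta_end - beta_start) * (s - 1) / (T - 1)
        prod *= 1.0 - beta
    return prod

def noised_sample(x0, t, rng=random):
    """Sample x_t directly from x_0 using the closed-form forward process."""
    ab = alpha_bar(t)
    return [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
            for x in x0]
```

Because alpha_bar shrinks toward zero as t grows, samples drift from the data toward pure Gaussian noise; the reverse (denoising) process, which the paper extends to recover each input's modality, runs this corruption backward.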

  • Adapting neural radiance fields (NeRFs) to dynamic scenes: At AAAI, Amazon scientists introduced a novel approach that significantly advances our ability to capture and model scenes with complex dynamics. Their work not only addresses previous limitations but also opens doors to new applications ranging from virtual reality to digital preservation.

  • How Project P.I. helps Amazon remove imperfect products: Find out how Amazon uses generative AI and computer vision to process multimodal information, synthesizing evidence from images captured during the fulfillment process and combining it with written customer feedback. The goal is to uncover both defects and, wherever possible, their causes, so issues can be addressed at the root before a product reaches the customer.

A combination of generative AI and computer vision imaging tunnels is helping Amazon proactively improve the customer experience.

Academic collaborations

  • Penn Engineering Ph.D. students receive funding from Amazon to advance trustworthy AI: To support the responsible development and regulation of AI tools, and the next generation of engineers building them, Amazon Web Services (AWS) is funding 10 Ph.D. student research projects at Penn Engineering that focus on advancing safe and responsible AI.

  • 98 Amazon Research Awards recipients announced: The recipients, representing 51 universities in 15 countries, will have access to Amazon datasets, AWS AI/ML services and tools, and more. The announcement includes awards funded under six calls for proposals during the fall 2023 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography and Privacy, AWS Database Services, and Sustainability.

  • Registration opens for Amazon's ML Summer School in India: The fourth edition of Amazon's ML Summer School is open to all eligible students from recognized institutes in India who are expected to graduate in 2025 or 2026. The program offers an intensive course on key ML topics, and the opportunity to learn from and interact with Amazon scientists, to help students prepare for a career in machine learning.

In other news

Best Student Paper award: “Significant ASR Error Detection for Conversational Voice Assistants” by John Harvill, Rinat Khaziev, Scarlett Li, Randy Cogill, Lidan Wang, Gopinath Chennupati, and Hari Thadakamalla.

Upcoming conferences

Learn more about Amazon's presence at the following conferences:

New publications


Diamond Redmond MSc., MBA

The tokenization concepts explored in Chronos are particularly intriguing. The superior zero-shot performance against task-specific models is very encouraging, as is the incremental improvement from fine-tuning on domain data. It will be interesting to see how this method interacts with the hallucination and overfitting challenges commonly seen on the language side when fine-tuning on smaller or divergent datasets. Thank you, Amazon Science, for this insightful and engaging work!
