May 2024
In this month's newsletter we cover Amazon's open-source model for time series forecasting, a new knowledge graph framework that uses LLMs to discern commonsense relationships, research collaborations with UW and Columbia, new features for Amazon Bedrock, and more. Be sure to check out our latest career opportunities, which include internships and programs for academics.
Deep dives
Adapting language model architectures for time series forecasting: Amazon researchers released the training code for Chronos, a family of pretrained, open-source models for time series forecasting, which are built on a language model architecture and trained with billions of tokenized time series observations to provide accurate zero-shot forecasts for new time series.
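The idea of feeding a language model "tokenized time series observations" can be sketched as mean scaling followed by uniform quantization into a fixed vocabulary. The function names, vocabulary size, and clipping range below are illustrative assumptions for a minimal sketch, not the released Chronos implementation:

```python
import numpy as np

def tokenize_series(series, vocab_size=4096, clip=15.0):
    """Sketch of tokenizing a real-valued series for a language model:
    mean-scale, clip, then uniformly quantize into discrete token ids."""
    series = np.asarray(series, dtype=float)
    scale = float(np.mean(np.abs(series))) or 1.0  # mean scaling
    scaled = np.clip(series / scale, -clip, clip)
    # map [-clip, clip] onto integer token ids 0 .. vocab_size-1
    bins = np.linspace(-clip, clip, vocab_size + 1)
    tokens = np.clip(np.digitize(scaled, bins) - 1, 0, vocab_size - 1)
    return tokens, scale

def detokenize(tokens, scale, vocab_size=4096, clip=15.0):
    """Approximately invert tokenization via bin centers."""
    width = 2 * clip / vocab_size
    centers = -clip + (np.asarray(tokens) + 0.5) * width
    return centers * scale

tokens, scale = tokenize_series([10.0, 12.0, 9.0, 11.0])
recovered = detokenize(tokens, scale)  # close to the original values
```

Once a series is discretized this way, forecasting reduces to next-token prediction, which is what lets an off-the-shelf language model architecture be reused.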
Evaluating the helpfulness of AI-enhanced catalogue data: The Amazon Catalog team uses generative AI to make product information more useful and A/B testing to evaluate enriched data. Two team members explain how causal random forests and Bayesian structural time series help them extrapolate from sparse A/B data.
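When A/B data are sparse, a Bayesian treatment of the observed counts keeps uncertainty explicit. The sketch below uses a simple Beta-Binomial Monte Carlo comparison as an illustrative stand-in; it is not the causal-random-forest or Bayesian-structural-time-series machinery the team describes:

```python
import random

def prob_b_beats_a(successes_a, trials_a, successes_b, trials_b,
                   draws=20000, seed=0):
    """Estimate P(rate_B > rate_A) under independent Beta(1, 1)
    priors by sampling both posteriors and counting wins for B."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        pa = rng.betavariate(1 + successes_a, 1 + trials_a - successes_a)
        pb = rng.betavariate(1 + successes_b, 1 + trials_b - successes_b)
        wins += pb > pa
    return wins / draws
```

With few trials the posteriors stay wide and the win probability stays near 0.5, which is exactly the hedged conclusion sparse A/B data should produce.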
Building commonsense knowledge graphs to aid product recommendation: To improve Amazon’s recommendation engine, Amazon researchers are building a knowledge graph that encodes commonsense relationships between products and queries. At SIGMOD/PODS 2024, they'll present COSMO, a framework that uses LLMs to extract those relationships.
More reliable nearest-neighbor search with deep metric learning: Deep metric learning is a powerful tool, but it yields inconsistent distances between data embeddings, which can hamper nearest-neighbor search. At this year's ICLR, Amazon researchers showed how to make distances more consistent, improving model performance.
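To see why distance consistency matters for nearest-neighbor search, consider the common baseline of L2-normalizing embeddings so that all comparisons happen on the unit sphere. This is a standard trick for making distances comparable across the embedding space, not the specific method from the ICLR paper:

```python
import numpy as np

def knn_cosine(query, embeddings, k=3):
    """L2-normalize embeddings and query, then return the indices
    of the k nearest neighbors by cosine similarity."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = E @ q  # cosine similarity to each embedding
    return np.argsort(-sims)[:k]

embeddings = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 1.0]])
neighbors = knn_cosine(np.array([1.0, 0.1]), embeddings, k=2)
```

After normalization, a fixed cosine gap means the same thing everywhere in the space, which is the kind of consistency nearest-neighbor retrieval relies on.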
Generalizing diffusion modeling to multimodal, multitask settings: At this year's ICLR, Amazon scientists showed how to generalize diffusion models to multimodal, multitask settings. The keys are a loss term that induces the model to recover modality in the reverse process and a way to aggregate inputs of different modalities.
Adapting neural radiance fields (NeRFs) to dynamic scenes: At AAAI, Amazon scientists introduced a novel approach that significantly advances our ability to capture and model scenes with complex dynamics. Their work not only addresses previous limitations but also opens doors to new applications ranging from virtual reality to digital preservation.
How Project P.I. helps Amazon remove imperfect products: Find out how Amazon uses generative AI and computer vision to synthesize evidence from images captured during the fulfillment process with written customer feedback, uncovering defects and, wherever possible, their causes, so that issues can be addressed at the root before a product reaches the customer.
Academic collaborations
New circuit boards can be repeatedly recycled: With the support of an Amazon Research Award, a team of researchers led by the University of Washington has developed a new printed circuit board (PCB) that performs on par with traditional materials and can be recycled repeatedly with negligible material loss. Learn more about the team’s research, which was published in Nature Sustainability.
At USC + Amazon Center symposium, speakers highlight ties between university and industry: Amazon and the University of Southern California hosted their third annual symposium for the USC-Amazon Center on Trustworthy AI, a research collaboration between USC faculty and students and Amazon scientists and engineers, which launched in 2021 and focuses on the development of new approaches to machine learning privacy, security, and trustworthiness. The event featured presentations on the Center's recently announced research projects, and a poster competition.
This robot predicts when you're going to smile – and smiles back: In a new study published in Science Robotics, researchers introduce a robot that can anticipate facial expressions and execute them simultaneously with a human. The robot has even learned to predict a forthcoming smile about 840 milliseconds before it occurs, and to co-express the smile with the person. The work was supported by the National Science Foundation (NSF), and Amazon through the Center of AI Technology (CAIT) at Columbia University.
Penn Engineering Ph.D. students receive funding from Amazon to advance trustworthy AI: To support the responsible development and regulation of AI tools and the next generation of engineers actualizing it, Amazon Web Services (AWS) is funding 10 Ph.D. student research projects at Penn Engineering that focus on advancing safe and responsible AI.
98 Amazon Research Awards recipients announced: The recipients, representing 51 universities in 15 countries, will have access to Amazon datasets, AWS AI/ML services and tools, and more. The announcement includes awards funded under six calls for proposals during the fall 2023 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography and Privacy, AWS Database Services, and Sustainability.
Registration opens for Amazon's ML Summer School in India: The fourth edition of Amazon's ML Summer School is open to all eligible students from recognized institutes in India who are expected to graduate in 2025 or 2026. The program offers an intensive course on key ML topics, and the opportunity to learn from and interact with Amazon scientists, to help students prepare for a career in machine learning.
In other news
Significant new capabilities make it easier to use Amazon Bedrock to build and scale generative AI applications: "With Amazon Bedrock, we’re focused on the key areas that customers need to build production-ready, enterprise-grade generative AI applications at the right cost and speed. Today I’m excited to share new features that we’re announcing across the areas of model choice, tools for building generative AI applications, and privacy and security," says Swami Sivasubramanian, VP of Data and Machine Learning at AWS.
How Amazon is harnessing solar energy, batteries, and AI to help decarbonize the grid: At Baldy Mesa, a solar farm enabled by Amazon, and developed, owned, and operated by The AES Corporation, machine learning models powered by AWS are helping predict when and how the project’s battery unit should charge and discharge energy back to the grid.
A look at Fire TV's decade of innovation: From voice search to AI-powered entertainment: From the very first Fire TV device to the new AI-powered Fire TV Search experience, learn how Fire TV continues to innovate for customers after 10 years.
Amazon and Meta join the Frontier Model Forum to promote AI safety: "Building AI that our customers can trust is one of the most important scientific challenges of our time. I’m proud to share that Amazon has joined the Frontier Model Forum to work with other industry leaders and the government to advance AI safely and securely," says Rohit Prasad, SVP and Head Scientist, Artificial General Intelligence at Amazon.
Navigating the scientist-to-manager transition: Join Amazon researchers on June 11 for an online panel about managing science teams, featuring Amazon science managers Federica Cerina, Martin Gross, and Mauro Piacentini, who will share their personal journeys, key lessons learned, and strategies for successfully bridging the gap from individual contributor to people leader.
Amazon wins Best Student Paper award: At this year's IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Amazon researchers won the Best Student Paper award for their publication, "Significant ASR error detection for conversational voice assistants", which proposes a system that can determine, to a high degree of accuracy, whether the semantics of a predicted and reference transcript are significantly different.
Upcoming conferences
Learn more about Amazon's presence at the following conferences:
SIGMOD/PODS 2024: June 9 - 14
NAACL 2024: June 16 - 21
CVPR 2024: June 17 - 21
SIGIR 2024: July 14 - 18
New publications
A deep dive into large language models for automated bug localization and repair
Agenda-driven question generation: A case study in the courtroom domain
AutoGluon-Multimodal (AutoMM): Supercharging multimodal AutoML with foundation models
BELIEVE: Belief-enhanced instruction generation and augmentation for zero-shot bias mitigation
Bosehedral: Compiler optimization for Bosonic quantum computing
Combining multiple metrics for evaluating retrieval-augmented conversations
Counterfactual ranking evaluation with flexible click models
Diversified ensembling: An experiment in crowdsourced machine learning
Explainable uncertainty attribution for sequential recommendation
FairRAG: Fair human generation via fair retrieval augmentation
Fine-to-coarse entailment hierarchy construction for coarse-to-fine story generation
GIO: Gradient information optimization for training dataset selection
High precision map conflation of fleet sourced traffic signs
Identifying shopping intent in product QA for proactive recommendations
iEdit: Localised text-guided image editing with weak supervision
Improving multi-hop reasoning in LLMs by learning from rich human feedback
Large language models for preventing medication direction errors in online pharmacies
Leveraging interesting facts to enhance user engagement with conversational interfaces
Mitigating bias for question answering models by tracking bias influence
Prompting vision-language models for aspect-controlled generation of referring expressions
Question suggestion for conversational shopping assistants using product metadata
RA-NER: Retrieval augmented NER for knowledge intensive named entity recognition
Radar-based localization for autonomous ground vehicles in suburban neighborhoods
Self-supervision improves diffusion models for tabular data imputation
Striking the right chord: A comprehensive approach to Amazon Music search spell correction
TouchUp-G: Improving feature representation through graph-centric finetuning
WikiDT: Visual-based table recognition and question answering dataset
LinkedIn | X/Twitter | Facebook | Instagram | GitHub | RSS
© 1996-2024 Amazon.com, Inc. or its affiliates | Privacy | Conditions of Use