Qdrant

Qdrant

Software Development

Berlin, Berlin · 24,677 followers

Massive-Scale Vector Database

About

Powering the next generation of AI applications with advanced, high-performance vector similarity search technology. The Qdrant engine is an open-source vector search database. It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more. Make the most of your unstructured data!

Website
https://qdrant.tech
Industry
Software Development
Company size
11–50 employees
Headquarters
Berlin, Berlin
Type
Privately held
Founded
2021
Specialties
Deep Tech, Search Engine, Open-Source, Vector Search, Rust, Vector Search Engine, Vector Similarity, Artificial Intelligence, and Machine Learning

Locations

Employees at Qdrant

Updates

  • Qdrant

    Qdrant 1.11 is all about making a statement. This release focuses on features that improve memory usage and optimize segments.
    - Defragmentation: storage for multitenant workloads is more optimized and scales better.
    - On-Disk Payload Index: store less frequently used data on disk, rather than in RAM.
    - UUID for Payload Index: additional data types for payload can result in big memory savings.
    There are also a few more additions to the recently introduced Query API:
    - GroupBy Endpoint: use this query method to group results by a certain payload field.
    - Random Sampling: select a random subset of data points from a larger dataset.
    - Hybrid Search Fusion: we are adding the Distribution-Based Score Fusion (DBSF) method.
    In case you haven't checked out the Web UI, feel free to try our newest tools to explore your data:
    - Search Quality Tool: test the precision of your semantic search requests in real time.
    - Graph Exploration Tool: visualize vector search in context-based exploratory scenarios.
    Blog: https://lnkd.in/dnDM_6yh

    Qdrant 1.11 - The Vector Stronghold: Optimizing Data Structures for Scale and Efficiency - Qdrant

    qdrant.tech
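    The new Query API additions can be sketched with the Python client. This is a minimal, illustrative sketch assuming qdrant-client >= 1.11 and an in-memory instance; the collection name, vector size, and payload field are our own placeholders:

    ```python
    from qdrant_client import QdrantClient, models

    # In-memory instance for illustration; point the client at your cluster in production.
    client = QdrantClient(":memory:")
    client.create_collection(
        collection_name="docs",
        vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
    )
    client.upsert(
        collection_name="docs",
        points=[
            models.PointStruct(
                id=i,
                vector=[0.1 * (i + 1)] * 4,
                payload={"author": f"author-{i % 2}"},
            )
            for i in range(10)
        ],
    )

    # GroupBy endpoint: group results by a payload field.
    groups = client.query_points_groups(
        collection_name="docs",
        query=[0.2, 0.2, 0.2, 0.2],
        group_by="author",
        limit=2,       # number of groups
        group_size=3,  # hits per group
    )

    # Random sampling: select points uniformly at random.
    sample = client.query_points(
        collection_name="docs",
        query=models.SampleQuery(sample=models.Sample.RANDOM),
        limit=5,
    )

    # For hybrid search, DBSF is selected via models.FusionQuery(fusion=models.Fusion.DBSF)
    # inside a query with dense and sparse prefetch branches.
    ```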

  • Qdrant

    🛡️ Building Resilient RAG Applications with Guardrails and Semantic Caching
    Create a system that delivers accurate data retrieval and safe content generation, even when handling complex queries. In this article, Kameshwara Pavan walks us through creating a robust RAG architecture that integrates Qdrant, LiteLLM (YC W23), Redis, and Llama-Guard-3-8b. Key takeaways:
    ✔ Hybrid Search with Qdrant: combines dense and sparse models to enhance the precision of data retrieval.
    ✔ Efficiency with LiteLLM and Redis: utilizes semantic caching to speed up processing and improve consistency.
    ✔ Safety with Llama-Guard-3-8b: implements stringent pre- and post-processing checks to ensure content safety and relevance.
    🔗 Go deeper into the implementation in the full article: https://lnkd.in/djZ8hb3V
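    The semantic-caching idea can be sketched in plain Python. This is an illustrative toy using cosine similarity over stored query embeddings; the class name, threshold, and vectors are our own, not from the article, which builds on Redis and LiteLLM:

    ```python
    import numpy as np
    from typing import Optional

    class SemanticCache:
        """Toy semantic cache: return a stored answer when a new query's
        embedding is similar enough to a previously answered one."""

        def __init__(self, threshold: float = 0.9):
            self.threshold = threshold
            self.entries = []  # list of (embedding, answer) pairs

        @staticmethod
        def _cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def put(self, embedding, answer: str) -> None:
            self.entries.append((np.asarray(embedding, dtype=float), answer))

        def get(self, embedding) -> Optional[str]:
            q = np.asarray(embedding, dtype=float)
            best = max(self.entries, key=lambda e: self._cosine(q, e[0]), default=None)
            if best is not None and self._cosine(q, best[0]) >= self.threshold:
                return best[1]
            return None

    cache = SemanticCache(threshold=0.9)
    cache.put([1.0, 0.0, 0.0], "Paris is the capital of France.")
    hit = cache.get([0.98, 0.05, 0.0])   # near-duplicate query -> cache hit
    miss = cache.get([0.0, 1.0, 0.0])    # unrelated query -> cache miss
    ```

    A hit skips the retrieval and generation steps entirely, which is where the latency and consistency wins come from.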

  • Qdrant

    🧐 #VectorWeekly: Quantize, and how to quantize, that is the question.
    When solutions move to production, balancing the trade-offs between search accuracy, memory usage, and search speed becomes a real problem. Embeddings are typically high-dimensional, high-precision vectors, and reducing this precision can often lower memory usage significantly without hurting search quality much, since high-dimensional embeddings tend to contain noise irrelevant to the task. Vector databases usually use quantization, which reduces noise by smoothing out small variations in the data:
    ✅ Binary Quantization: the best combination of memory reduction & search-speed optimization (up to 40x). float32 → bit: values greater than zero become 1, others become 0. 👉 https://lnkd.in/gCH76i2D
    ✅ Scalar Quantization: a good balance between keeping search accuracy high and reducing memory usage (75% compression). float32 → uint8: neural embeddings often cover only a small subrange of the values floats can represent. We can bound this range, keeping the X% most probable values; since the uint8 range is also bounded, converting between the two ranges is straightforward. 👉 https://lnkd.in/dyauejf8
    ✅ Product Quantization: the highest memory reduction (up to 64x with Qdrant). float32 → uint8: vectors are split into a selected number of chunks, and each chunk is replaced by the ID of the nearest centroid from a set of 256 defined by the k-means algorithm. 👉 https://lnkd.in/d3pGex9H
    ❗️We recommend avoiding Product Quantization unless extreme RAM reduction is needed: it noticeably reduces precision and often slows down searches, since distance calculations between vectors require remapping cluster indices back to float32, which is not SIMD-friendly.
    ❗️Qdrant stores the original, non-quantized vectors on disk. This allows rescoring the top-k results with the original vectors after the search in the quantized space. If disk-space reduction is needed, consider storing data in a different datatype (e.g., float16 or uint8 instead of the default float32).
    🤔 These three quantization methods are not the only ones available, of course. For example, Residual Quantization is said to be particularly effective for unstructured vectors, potentially outperforming Product Quantization in search accuracy and removing the need for rescoring. Have you used it? What has your experience been like? 👉 https://lnkd.in/dFBRAHjU
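    The binary and scalar schemes above can be sketched in a few lines of NumPy. This is an illustrative sketch of the float32 → bit and float32 → uint8 mappings only; the vector size and clipping range are arbitrary, and production systems derive the range from the data distribution:

    ```python
    import numpy as np

    def binary_quantize(v: np.ndarray) -> np.ndarray:
        """Binary quantization: values greater than zero become 1, others 0.
        Bit-packing shrinks a float32 vector by 32x."""
        return np.packbits(v > 0)

    def scalar_quantize(v: np.ndarray, lo: float, hi: float) -> np.ndarray:
        """Scalar quantization: clip to a bounded subrange [lo, hi], then map it
        linearly onto the uint8 range (4x smaller, i.e. 75% compression)."""
        clipped = np.clip(v, lo, hi)
        return np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    v = rng.normal(size=128).astype(np.float32)  # a toy 128-dim embedding

    b = binary_quantize(v)             # 16 bytes instead of 512
    q = scalar_quantize(v, -3.0, 3.0)  # 128 bytes instead of 512
    ```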

  • Qdrant

    We’re thrilled to officially welcome Juan Carmona to the Qdrant team as a Visual Designer! Based in Medellín, Colombia, Juan has already been making a significant impact on our design efforts for some time now. His expertise in Visual Design, Web Design, and Product Design is helping us create more user-friendly and visually appealing experiences. On a personal note, Juan is an active member of design communities and loves creating content for other enthusiastic designers. Welcome aboard! 🎉

  • Qdrant

    🤝 Kern AI & Qdrant: AI Solutions for the Financial Sector
    Kern AI, creators of a data-centric, low-code platform for AI-based solutions, developed a RAG-based chatbot for first-level support teams at insurance companies such as Markel Insurance SE. It reduced average response times from five minutes to under 30 seconds per customer query while keeping hallucination rates under 1%. Kern AI chose Qdrant for their chatbot due to its:
    ✅ Multi-vector storage
    ✅ Hybrid search & filtering
    ✅ Easy setup
    👉 Check it out: https://lnkd.in/dgVfEdaS

  • Qdrant

    📚 Part #1 of the series “From prototyping in Python to production in Rust.”
    An amazingly detailed step-by-step guide to building a basic book recommendation system. It demonstrates many good practices for using Qdrant, including batched point upserts: https://lnkd.in/dXBf6hR4
    🔗 Check it out: https://lnkd.in/dBgCXRcq
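    Batched point upserts look roughly like this with the Python client. This is a sketch under our own assumptions (collection name, batch size, and toy vectors are illustrative; the series itself moves the workload to Rust):

    ```python
    from qdrant_client import QdrantClient, models

    # In-memory instance for illustration; use your cluster URL in production.
    client = QdrantClient(":memory:")
    client.create_collection(
        collection_name="books",
        vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
    )

    points = [
        models.PointStruct(
            id=i,
            vector=[0.1 * (i + 1)] * 4,
            payload={"title": f"book-{i}"},
        )
        for i in range(6)
    ]

    # Upsert in fixed-size batches instead of one request per point:
    # fewer round-trips, and each request stays a manageable size.
    BATCH = 2
    for start in range(0, len(points), BATCH):
        client.upsert(collection_name="books", points=points[start:start + BATCH])
    ```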

  • Qdrant reposted this

    Andre Zayarni
    Co-founder at Qdrant, Vector Database.

    Jina AI just released Jina ColBERT v2, a multilingual late-interaction retriever for embedding and reranking. The new model supports 89 languages with superior retrieval performance, user-controlled output dimensions, and an 8192-token context length. 🚀 You can start using the new model right away with the Qdrant Query API. See code examples in the official announcement blog: https://lnkd.in/dB4TZsHz
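    Storing late-interaction (multi-vector) embeddings in Qdrant can be sketched as below, assuming qdrant-client >= 1.10. The collection name and 4-dimensional toy vectors are our own placeholders; Jina ColBERT v2 produces larger, user-controlled dimensions:

    ```python
    from qdrant_client import QdrantClient, models

    # In-memory instance for illustration; use your cluster URL in production.
    client = QdrantClient(":memory:")
    client.create_collection(
        collection_name="colbert_docs",
        vectors_config=models.VectorParams(
            size=4,
            distance=models.Distance.COSINE,
            # ColBERT-style models score a matrix of token vectors with MaxSim.
            multivector_config=models.MultiVectorConfig(
                comparator=models.MultiVectorComparator.MAX_SIM
            ),
        ),
    )
    # One point = one document = a list of per-token vectors.
    client.upsert(
        collection_name="colbert_docs",
        points=[
            models.PointStruct(
                id=1,
                vector=[[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]],
            )
        ],
    )
    # The Query API scores the query's token matrix against each document.
    hits = client.query_points(
        collection_name="colbert_docs",
        query=[[0.1, 0.2, 0.3, 0.4]],
    )
    ```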

  • Qdrant

    InfinyOn Fluvio, an open-source platform for high-speed, real-time data pipelines written in Rust 🦀, now supports Qdrant as a sink destination. Fluvio is cloud-native and designed to work with any infrastructure type, from bare-metal hardware to containerized platforms. The sink connector streams data from Fluvio topics into Qdrant collections, leveraging Fluvio's delivery guarantees and high throughput.
    🔗 You can learn to use the connector from our integration docs: https://buff.ly/3Z3p6D9
    🔗 Find the connector on GitHub: https://buff.ly/4e2nxcS
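    A sink-connector config for such a pipeline typically looks something like the sketch below. All key names and values here are hypothetical placeholders; consult the integration docs linked above for the actual connector schema and version:

    ```yaml
    # Hypothetical config sketch -- check the integration docs for the exact keys.
    meta:
      version: 0.1.0
      name: my-qdrant-sink
      type: qdrant-sink
      topic: embeddings            # Fluvio topic to stream from
    qdrant:
      url: https://example.cloud.qdrant.io:6334  # gRPC endpoint of your cluster
      api_key: "${QDRANT_API_KEY}"
    ```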

  • Qdrant

    The Trieve team has launched a discovery-focused search engine for Hacker News, powered by Qdrant. It is designed to deliver the most relevant results, even for very specific queries. 🔗 Check it out and see the difference: https://hn.trieve.ai

    Trieve

    Our discovery-focused search for Hacker News shipped this morning! We added several features, including site filters, public analytics, RAG AI chat, recommendations, and semantic search. 🔎⚡ ghoomketu, a HN user, had this to say about it — “This is impressive! I've frequently encountered challenges with Algolia search not locating specific items, but this appears to offer a much more detailed search capability. I've bookmarked this site and hope it remains available when I need it, unlike many great Show HN posts that vanish after six months or so.” https://hn.trieve.ai

    Trieve HN Discovery

    hn.trieve.ai

Similar pages

Browse jobs

Funding

Qdrant · 3 funding rounds in total

Last round

Series A

$28,000,000.00

More information on Crunchbase