Deep Detection

IT Services and IT Consulting

Troy, NY · 560 followers

We invent, develop, and deploy novel AI to detect, predict, track, unmask, and defend against criminal activity.

About us

Deep Detection, LLC operates, as you might have guessed, in the realm of detective work. That work includes building and deploying intelligent software systems capable of predicting, tracking, and defending against cybercrime, financial fraud, medical fraud, and similar criminal acts.

Website
https://www.deepdetection.com
Industry
IT Services and IT Consulting
Company size
2-10 employees
Headquarters
Troy, NY
Type
Privately Held
Founded
2015

Locations

Employees at Deep Detection

Updates

  • Deep Detection reposted this

    AI Cyber Challenge competitors are creating the future of AI-driven cybersecurity and advancing systems to protect modern life from real-world vulnerabilities. We thank all competitors for their hard work and look forward to the next phase of the competition. Stay tuned to aicyberchallenge.com for updates and for more content from the semifinal event, #ICYMI.

  • Deep Detection reposted this

    View profile for Konstantine Arkoudas

    Chief AI Officer & Architect

    My recent -- and ongoing -- series of articles on AI ethics and regulation was launched a few weeks ago by posing a simple question as a starting point for exploration. On the one hand, we have faulty software practices, which over the years have caused many hundreds of deaths, thousands of injuries, trillions of dollars in damages, and tremendous inconvenience for the general public. On the other hand, we have the gamut of issues underpinning the various and sundry calls for ethical AI governance and AI regulation, ranging from the use of biased AI models in the workplace, facial recognition, and credit decisions to the threats posed by LLMs and fears about killer robots. These latter concerns have remained largely speculative and have had very limited impact on society so far, as the articles in this series have shown in detail. Yet public discourse has been obsessed with the risks of AI for over a decade now (a trend that intensified sharply after the appearance of LLMs) while virtually ignoring the far more destructive problem of poor software engineering practices. What accounts for the disparity? The attention given to the two issues seems inversely proportional to their impact. Why is that?

    The question is particularly apropos in view of the recent CrowdStrike debacle, which seems to have been caused by a good old-fashioned code bug -- "a logic error", as CrowdStrike itself put it in a blog post published today (https://lnkd.in/gjnWpFMr). Yet in a week or two the incident will have largely faded from our collective memory, relegated to the footnotes of history, while the debate about AI ethics and killer robots will still be raging like a wildfire.

    The fourth article in the series, just posted on Substack, dives into that puzzling disparity in earnest and explores whether there is something truly exceptional about AI that justifies the recent panic about it and warrants a more stringent approach to its regulation. Along the way, the article touches on a variety of relevant points, from problems with continual learning and catastrophic forgetting, to AML (anti-money laundering) regulations and blockchain, to singularity scenarios, the recent change of heart by Bengio and Hinton regarding such scenarios, and the use (and abuse) of the precautionary principle in making public policy.

    The precautionary principle is a particularly rich and interesting topic, so a separate article devoted exclusively to it will be published as the fifth installment in the series next week. For now, here's the fourth installment: https://lnkd.in/gxrRgiGj

    Technical Details: Falcon Update for Windows Hosts | CrowdStrike

    crowdstrike.com

  • Deep Detection reposted this

    View profile for Matthew Fauci

    Army Veteran | Cybersecurity Student | IT Analyst | Research Assistant | President's List | Trustees' List

    I am celebrating my new certification! This is the course for those of you who want a solid foundation and who are looking to get a discount on the CompTIA Sec+ certification. This certification is just one step on my path into Cybersecurity. I am looking forward to continuing on this path and learning from those who are looking to pass on their knowledge. I would also like to thank Hire Heroes for getting me into this program and helping veterans like me everywhere! #google #cyber #HireHeroesUSA #veterans #growth

  • Deep Detection reposted this

    View profile for Alexander Bringsjord

    Entrepreneur || Inventor || Researcher || Published Author || System Architect

    It has been almost four decades, and my Dad is still taking me to school! 🤷‍♂️ Hope you enjoy the podcast series from Audible. Definitely worth your time. Thanks for being you, Dad. So proud to be your son. ❤️ 🇳🇴 💪 📚 #Philosophy #Logic #AI

    Exploring the Immaterial: A Conversation with Dr. Selmer Bringsjord

    audible.com

  • View organization page for Deep Detection

    560 followers

    The necessary paradigm shift to address problems with LLMs & #ML in general is dirt simple: Hybridize with computational logic! #AI #Logic #Ethics
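
    As a minimal illustration of that hybrid pattern, here is a sketch in which a statistical model proposes candidate facts and a symbolic layer admits only those consistent with a logical rule set. Everything in it (the propose_candidates stub, the toy knowledge base, the single clearance rule) is an illustrative assumption, not Deep Detection's actual system.

    ```python
    # Neuro-symbolic sketch: a statistical component proposes, a logical
    # component disposes. `propose_candidates` is a hypothetical stand-in
    # for an LLM call; the knowledge base and rule are toy examples.
    from typing import Callable, Iterable, Optional

    Fact = tuple[str, str, str]  # (subject, relation, object)

    KNOWLEDGE_BASE: set[Fact] = {
        ("alice", "role", "analyst"),
        ("alice", "clearance", "secret"),
    }

    RULES: list[Callable[[set[Fact], Fact], bool]] = [
        # Hard constraint: nobody may hold two distinct clearance levels.
        lambda kb, f: not (
            f[1] == "clearance"
            and any(s == f[0] and r == "clearance" and o != f[2] for s, r, o in kb)
        ),
    ]

    def propose_candidates(prompt: str) -> Iterable[Fact]:
        """Hypothetical LLM stub; real code would parse model output here."""
        yield ("alice", "clearance", "top-secret")   # contradicts the knowledge base
        yield ("alice", "department", "fraud-unit")  # logically admissible

    def first_verified(prompt: str) -> Optional[Fact]:
        """Return the first candidate that every logical rule admits."""
        for fact in propose_candidates(prompt):
            if all(rule(KNOWLEDGE_BASE, fact) for rule in RULES):
                return fact
        return None

    print(first_verified("What do we know about Alice?"))
    # -> ('alice', 'department', 'fraud-unit'); the contradictory fact is rejected
    ```

    In a production system the rule layer would be a theorem prover or constraint solver rather than Python lambdas; the point is only that statistical proposals pass through a deductive gate before anything downstream trusts them.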

  • Deep Detection reposted this

    View profile for Dr. Jeffrey Funk

    Technology Consultant

    Many machine-learning experts “don’t view hallucination as fixable because it stems from large language models (LLMs) doing exactly what they were developed and trained to do: respond, however they can, to user prompts.” The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. The #researchers claim that “today’s LLMs were never designed to be purely accurate. They were created to create—to generate.”

    “The reality is: there’s no way to guarantee the factuality of what is generated,” one researcher notes, adding that all computer-generated “creativity is hallucination, to some extent.” One result: “The lowest hallucination rates among tracked AI models are around 3 to 5%.” The author of a recent academic paper put it this way: “For any LLM, there is a part of the real world that it cannot learn, where it will inevitably #hallucinate.”

    One main reason AI chatbots routinely hallucinate is that “they are trained to predict what should come next in a sequence such as a string of text. If a model’s training data include lots of information on a certain subject, it might produce accurate outputs. But LLMs are built to always produce an answer, even on topics that don’t appear in their training data.” The researcher says this increases the chance that errors will emerge.

    Another reason: “these massive models are trained on orders of magnitude more data than they can store—and data compression is the inevitable result.” When LLMs cannot “recall everything exactly like it was in their training, they make up stuff and fill in the blanks.”

    Other researchers argue that “reducing calibration can boost factuality while simultaneously introducing other flaws in LLM-generated text. Uncalibrated models might write formulaically, repeating words and phrases more often than a person would.” The problem is that users expect #AI chatbots to be both factual and fluid.

    Is there a solution? One researcher says: “They are wonderful idea generators, but they are not independent problem solvers. You can leverage them by putting them into an architecture with verifiers—whether that means putting more humans in the loop or using other automated programs.”

    The article concludes: “In a future where specialized systems verify LLM outputs, AI tools designed for specific contexts would partially replace today’s all-purpose models. Each application of an AI text generator (be it a customer service chatbot, a news summary service or even a legal adviser) would be part of a custom-built architecture that would enable its utility.”

    Many companies are developing specialized systems, and as a result implementation is taking longer and costing more than originally thought. And we don’t yet know how well these specialized systems will solve the problem of #hallucinations. (A toy sketch of the generate-then-verify loop described here follows the article link below.) #technology #innovation #hype #artificialintelligence #ethics https://lnkd.in/gbR6JhDK

    Hallucinations Are Baked into AI Chatbots

    scientificamerican.com
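
    Here is the promised sketch of the verifier architecture the quoted researcher describes: retry generation until an automated checker accepts the draft, and escalate to a human otherwise. The names (generate_summary, TRUSTED_FACTS, verify) and the term-overlap check are illustrative assumptions, not the article's method.

    ```python
    # Toy generate-then-verify loop. `generate_summary` stands in for a real
    # model call; TRUSTED_FACTS stands in for retrieval from a vetted source.
    import re

    TRUSTED_FACTS = {
        "crowdstrike outage cause": "a logic error in a Falcon content update",
    }

    def generate_summary(question: str, attempt: int) -> str:
        """Hypothetical LLM stub: returns a different draft on each retry."""
        drafts = [
            "The outage was caused by a coordinated cyberattack.",  # hallucinated
            "The outage was caused by a logic error in a Falcon content update.",
        ]
        return drafts[min(attempt, len(drafts) - 1)]

    def verify(question: str, answer: str) -> bool:
        """Accept an answer only if it contains every key term of the reference."""
        reference = TRUSTED_FACTS.get(question)
        if reference is None:
            return False
        key_terms = {w for w in re.findall(r"[a-z]+", reference.lower()) if len(w) > 4}
        return all(term in answer.lower() for term in key_terms)

    def answer_with_verifier(question: str, max_attempts: int = 3) -> str:
        """Retry until the verifier accepts a draft, else escalate to a human."""
        for attempt in range(max_attempts):
            draft = generate_summary(question, attempt)
            if verify(question, draft):
                return draft
        return "ESCALATE: no draft passed verification; route to a human reviewer."

    print(answer_with_verifier("crowdstrike outage cause"))
    # -> "The outage was caused by a logic error in a Falcon content update."
    ```

    The escalation branch is the “more humans in the loop” half of the quoted advice; the automated verifier is the other half.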

Similar pages

Browse jobs