    SEARCHED FOR:

    REGULATING BIAS IN AI

    Financial industry grappling with AI's gifts and perils, executives say

    Zack Kass, a former head of business partnerships at OpenAI, said AI systems could be better than humans at explaining to clients how they arrived at recommendations such as portfolio allocations or lending decisions. He added that people are not good at explaining the subconscious biases that can affect such decisions.

    AI companies train language models on YouTube's archive, making private videos a privacy risk

    The intricate nexus of Big Tech self-regulation, privacy concerns and legal safeguards in AI development underscores the need for ethical AI model training and deployment.

    Pope Francis will be the first pontiff to address a G7 summit. He's raising the alarm about AI

    Francis will address G7 leaders on Friday at their annual gathering in southern Italy, a first for a pope. He intends to use the occasion to join the chorus of countries and global bodies pushing for stronger guardrails on AI following the boom in generative artificial intelligence kickstarted by OpenAI's ChatGPT chatbot.

    Emergence & Application of Generative AI in Insurance

    EU data protection board says ChatGPT still not meeting data accuracy standards

    EU task force criticizes OpenAI's efforts to improve accuracy of ChatGPT output in compliance with data rules.

    Valley must respect (Beverly) Hills

    Creators of AI will have to be especially careful about copyright, given the scope for bias inherent in the technology. Bias enters through human input, but AI can amplify it. Creative people lending their names, faces or voices to such endeavours would be justified in seeking protection of their livelihoods, in addition to regular compensation. The same would apply to the content AI trains on, such as the work of writers, directors and musicians.

      Equipping leaders for the AI era: A CEO's toolkit

      At this critical juncture, as AI steadily reshapes industries, leadership, from CEOs to CIOs, needs a proactive approach and a sound understanding of generative AI.

      Pay for content, slice for responsible AI

      AI licensing deals with media outlets such as the WSJ and FT help tech firms avoid copyright conflicts; Google, for instance, is paying the WSJ for new AI content. Transformative technology calls for regulation, and fair compensation for content creators is essential for responsible AI training.

      Climate change poses a child labour ‘threat multiplier’

      At least 160 million children, or one in 10, are part of the global workforce—and climate change is a “threat multiplier,” according to recent research published by the International Labour Organization.

      View: AI needs to be kept under an IA eye

      Gartner predicts that global end-user spending on security and risk management (SRM) will rise by 14.3% in 2024, with security services the largest segment at 42% of total SRM spending. As firms increasingly adopt AI, securing AI systems against threats and vulnerabilities becomes essential. Internal audit (IA) plays a crucial role in ensuring that AI security controls are established and effective in mitigating risks.

      First major attempts to regulate AI face headwinds from all sides

      Lawmakers are tackling AI bias challenges with impact assessments, supported by industry groups like TechNet. Concerns exist over companies self-reporting discrimination risks, highlighting the need for transparency and accountability.

      Keeping AI's future open: A key to ethical and inclusive AI innovation and governance

      The rapid evolution of AI presents a unique opportunity to merge cultures, ideas, and knowledge into a unified canvas of innovation. With AI predicted to inject $15.7 tn into the global economy, its role extends beyond economic growth to societal transformation. Policymakers worldwide are focusing on a broader, more inclusive approach to governance that transcends conventional regulatory strategies.

      View: How to future-proof AI regulation

      The Ministry of Electronics and Information Technology (MeitY) issued a revised AI advisory on March 15, overturning a provision from the March 1 version that required intermediaries to obtain government approval before launching generative AI or other AI deployments. The original advisory raised concerns about its legality and stifling AI innovation due to its unclear scope and broad control assigned to the government.

      European regulators crack down on Big Tech

      Apple, Meta Platforms and Alphabet's Google could receive hefty fines by the end of the year for alleged breaches including disparaging rival products on their platforms.

      Govt withdraws mandate requiring AI models to seek approval before deployment

      In a fresh advisory issued on Friday, the ministry of electronics and information technology said that unreliable AI foundational models, LLMs, generative AI software or algorithm or any such model should be made available to Indian users only after “appropriately labelling the possible inherent fallibility or unreliability of the output generated,” the advisory read.

      View: AI regulation must be more nuanced than set out in GoI’s half-baked advisory

      A recent advisory says that under-tested or unreliable AI models available to users on the 'Indian internet' must obtain GoI's explicit permission, and be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated. But this is problematic.

      Stop throttling our techonomy

      MeitY issued an 'advisory' to the effect that any 'under-testing' AI platform would need the 'explicit permission of GoI' before being allowed to be deployed on the Indian internet. While an advisory has no binding legal effect, Indian firms have been warned that this 'signals the future of regulation'.

      Won’t tolerate AI biases, onus on Google to train models: Ashwini Vaishnaw

      This comes days after a post on X claimed that Google's Gemini was biased when asked whether PM Modi was a "fascist". The user called the model "downright malicious" for its responses to questions about whether prominent global leaders were 'fascist'.

      Writing the new rules for AI

      At the Global Partnership on Artificial Intelligence summit in New Delhi in 2023, Prime Minister Narendra Modi stressed the importance of creating a global framework for the ethical use of AI, including a protocol for testing and deploying high-risk and frontier AI tools. Earlier, at the first global AI Safety Summit 2023 at Bletchley Park, 28 countries called for international cooperation to manage the challenges and risks of AI.

      US regulators add artificial intelligence to potential financial system risks

      The Financial Stability Oversight Council, which comprises top financial regulators and is chaired by Treasury Secretary Janet Yellen, flagged the risks posed by AI for the first time in its annual financial stability report.

      We need to walk the TAItrope

      In a significant move on October 30, President Joe Biden issued an executive order focusing on the 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.' This executive order outlines a comprehensive approach to AI, addressing key aspects such as establishing standards for development, promoting transparency, ensuring accountability, and tackling potential biases and discrimination within AI systems.

      EU AI Act to serve as blueprint for global rules: lawmaker Brando Benifei

      While several countries have been looking at ways to regulate AI, European lawmakers have taken a lead by drafting AI rules aimed at setting a global standard for a technology key to almost every industry and business. The draft rules could get approved by next month.

      More (intelligent) power to the people

      What we know so far about AI gives us a simple playbook about privacy, bias and disclosure. What we don't know about AI makes human oversight diabolically difficult. Take jobs. Where does a country draw the line on the kind of jobs machines can be allowed to take over? Populous, poorer nations will have one set of responses. Underpopulated, rich countries will have a different set. Both will be correct.

      With executive order, White House tries to balance AI's potential and peril

      On Monday, the White House announced its own attempt to govern the fast-moving world of AI with a sweeping executive order that imposes new rules on companies and directs a host of federal agencies to begin putting guardrails around the technology.

      White House's new order to mitigate AI risks will involve wide-ranging action

      The executive order Joe Biden will unveil is the latest step by the administration to set parameters around AI as it makes rapid gains in capability and popularity in an environment of, so far, limited regulation.

      Microsoft to spend $3.2 billion in Australia as AI regulation looms

      Microsoft said it will spend A$5 billion ($3.2 billion) expanding its artificial intelligence (AI) and cloud computing abilities in Australia over two years as part of a wide-ranging effort that includes skills training and cyber security.

      Biased bots? US lawmakers take on 'Wild West' of AI recruitment

      Derek Mobley, a Black man with a finance degree, has filed a class action lawsuit against human resources platform Workday Inc, alleging that the platform's algorithm discriminates against Black, disabled, and older job applicants. The lawsuit is part of a larger battle to regulate the use of artificial intelligence (AI) in the US recruitment market.

      The question of what "responsible AI" might look like goes to the heart of an increasingly robust push-back against the unrestricted use of automation in the US recruitment market.

      Why it's vital to consistently evaluate AI systems through a more humane approach to deal with bias

      AI systems could be consistently evaluated and tested against predetermined indicators of bias, with the results made public to incentivise improvements and to inform other stakeholders engaging with such systems. Such regulatory frameworks need not be hardcoded.

    The Economic Times