ChatGPT is being challenged left and right. The EU finalizes the AI Act to regulate AI’s risks in critical areas, requiring high-risk AI systems to undergo thorough assessments. Elon Musk’s xAI launches Grok, a ChatGPT rival limited to X Premium users, and seeks significant funding. The U.S. FTC raises concerns about AI’s role in copyright violations and bias, while Google introduces Gemini, an advanced AI model that now powers its Bard chatbot.
Google’s Project Ellmann, an extension of its Gemini AI model, is poised to be a highly personalized AI assistant. It plans to use personal data like photos and documents to offer tailored interactions, potentially outperforming ChatGPT. This ambitious project raises significant privacy concerns due to Google’s extensive data access and the sensitive nature of the personal information involved.
Google’s Gemini Pro, now powering the Bard AI chatbot, was tested against OpenAI’s ChatGPT 3.5 and GPT-4. It showed unique response strategies and excelled at interpreting historical data and resolving conflicting information. Its creative writing, however, was less impressive than that of the ChatGPT models. Gemini Pro is currently English-only; an upgraded multimodal version, Gemini Ultra, is expected in 2024.
Google’s new Gemini Ultra AI outperforms GPT-3.5, excelling in 30 of 32 LLM benchmarks and scoring 90% on MMLU. Gemini processes multiple media types and comes in three versions: Ultra, Pro, and Nano. Amid this advancement, legal and ethical concerns arise over the use of creators’ works without permission or compensation, highlighting a growing tension between AI development and copyright law.
University of Technology Sydney researchers have introduced a non-invasive EEG cap capable of translating thoughts into text. The device, built on the DeWave AI model, interprets brainwaves as words without invasive surgery. Although its current accuracy is around 40%, it marks a significant step in brain-to-text technology, potentially revolutionizing communication, especially for those unable to speak.
The New Stack, a technology news outlet, acknowledges the challenge generative AI poses to journalistic credibility. With public trust in media declining, they emphasize maintaining integrity despite AI advancements. Their strict policy prohibits AI-generated content in articles, insisting on original, human-written material and independent fact verification, upholding editorial integrity and reader trust.
The filing presents chat logs from a Meta-affiliated researcher discussing the acquisition of the dataset in a Discord server. These logs serve as potential evidence of Meta’s awareness of possible legal infringement. The conversation, cited in the complaint, shows a back-and-forth dialogue between researcher Tim Dettmers and Meta’s legal department.
The Arena Group dismissed CEO Ross Levinsohn after Sports Illustrated, owned by Arena, faced backlash over alleged use of AI-generated content. Articles from a third-party provider were removed from SI.com amid an internal probe. Majority stakeholder Manoj Bhargava steps in as interim CEO following Levinsohn’s exit and other high-level terminations.
AppDirect announced its AI Marketplace, currently in beta, which allows partners to create no-code chatbots. These bots utilize data from AppDirect’s marketplace and other sources, aiming to streamline sales processes for technology advisors. The platform plans to expand in 2024, enhancing advisor and customer experiences in the technology marketplace.
Shamaine Daniels, a Pennsylvania Democrat, integrates an AI phone-banking tool, Ashley, into her congressional campaign. Developed by Civox, Ashley interacts with voters, discussing Daniels’ policies using generative AI. This innovative approach, a first in political campaigns, raises concerns about data security and voter trust amid its real-world testing.