Common Sense Media rated Ello’s AI reading coach among the top 10 ethical AI products, awarding it high marks for privacy and safety. Recognized for its Responsible AI practices and unique AI tutor design, Ello addresses child literacy with advanced speech recognition technology. It offers personalized reading experiences, aligning books with children’s interests and reading levels.
Microsoft plans to upgrade Bing AI, renamed Copilot, with GPT-4 Turbo, promising more accurate and faster responses. Although some issues are still being resolved, the upgrade is significant, as Microsoft’s Mikhail Parakhin discussed on X (formerly Twitter). GPT-4 Turbo, recently announced by OpenAI, will also be more cost-effective and feature more recent training data.
Arab Finance for Information Technology, collaborating with partners like Optofolio and Dahab Masr, launched GuROW to attract young investors. This app offers news services, AI financial advice, and diverse investment tools, aiming to advance the Egyptian investment market with technology. Available on iOS and Android, GuROW integrates various services to enhance user experience in financial markets.
The AI field is witnessing a significant shift towards artificial general intelligence (AGI), systems that could match adult human intelligence and make decisions independently. This advancement raises concerns about autonomy and potential risks to humanity. The situation involving Sam Altman of OpenAI underscores the seriousness of AGI’s impact, emphasizing the urgent need for regulatory frameworks to manage these emerging technologies.
Militaries globally, including the U.S. Pentagon’s “Replicator” program, are advancing towards autonomous weapons, aiming to deploy AI-driven drones by 2026. This shift towards technology-driven warfare maintains a focus on human oversight, especially concerning lethal force. International discussions are ongoing to set legal and ethical boundaries for AI in warfare, highlighting the balance between technological advancement and responsible use.
The United States, Britain, and 16 other countries have released a 20-page agreement outlining guidelines for AI safety, focusing on “secure by design” systems to prevent misuse. This non-binding document, a significant step in multinational collaboration, emphasizes monitoring AI for abuse and protecting data, but does not delve into ethical use or data sourcing. Europe leads in AI regulation, while the U.S. faces challenges due to political divisions.
The Pentagon’s Replicator program aims to field thousands of AI-enabled autonomous vehicles by 2026, marking a significant shift in U.S. military strategy towards smaller, smarter, and cost-effective technologies. This initiative anticipates the development of autonomous lethal weapons, raising crucial decisions about AI’s maturity and trustworthiness, especially as global powers like China and Russia advance their military AI without committing to responsible use.
Steven Nerayoff, a former Ethereum advisor turned critic, has accused Ethereum co-founder Vitalik Buterin of manipulating the network for personal gain and hinted at secret dealings with U.S. officials. Despite these allegations, Nerayoff plans to launch an AI-driven Web3 project, combining his decade-long AI experience with the ethos of cryptocurrency. He intends to pursue legal action against Ethereum while emphasizing the importance of truth over wealth.
Dr. Akbar Niazi Teaching Hospital in Islamabad has partnered with a Chinese tech firm to use AI for detecting cervical cancer in Pakistani women. With cervical cancer being the third most frequent cancer among Pakistani women, this initiative aims to improve early detection and reduce the high mortality rate caused by late-stage diagnoses and limited awareness of screening.
Sony is testing a new authentication technology in its cameras, in collaboration with the Associated Press and Camera Bits, to differentiate real photos from AI-generated images. This technology embeds a digital signature at the moment of capture, acting as a “birth certificate” for images. Set for a Spring 2024 release in select Sony cameras, it aims to combat fake imagery and protect photographers’ copyrights.
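The capture-time “birth certificate” idea can be sketched in a few lines. This is a hypothetical simplification: a real camera would sign with an asymmetric private key held in secure hardware, whereas here an HMAC over the image bytes stands in for that signature, and all names (`CAMERA_KEY`, `sign_at_capture`, `verify`) are illustrative, not Sony’s actual scheme.

```python
import hashlib
import hmac

# Hypothetical stand-in for a private key kept in the camera's secure hardware.
CAMERA_KEY = b"secret-key-in-secure-hardware"

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Produce an authentication tag at the moment of capture."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, tag: bytes) -> bool:
    """Recompute the tag; any later edit to the pixels invalidates it."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

photo = b"\x89...raw sensor data"
tag = sign_at_capture(photo)
print(verify(photo, tag))            # unmodified image -> True
print(verify(photo + b"edit", tag))  # altered image -> False
```

The same check works for anyone holding the verification key, which is how news agencies such as the Associated Press could confirm that an image is unaltered since capture.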
The Shape of Threats to Come: The Onslaught of Hacktivism, AI-based Attacks and Weaponised Deepfakes
In 2024, cybersecurity faces major shifts with AI and ML increasingly used by attackers and defenders. Key predictions include a rise in AI-powered attacks, cloud-based AI resource targeting, more supply chain and infrastructure attacks, AI’s impact on cyber insurance, ongoing hacktivism and nation-state attacks, weaponized deepfakes, sophisticated phishing, and a surge in ransomware and ‘living off the land’ tactics.
The U.S. Department of Homeland Security’s CISA and the UK’s National Cyber Security Centre have released the “Guidelines for Secure AI System Development,” in collaboration with 21 global agencies. This pioneering global agreement focuses on integrating cybersecurity into the AI system development lifecycle. It emphasizes “secure by design” principles, aiming to foster safe, secure, and trustworthy AI systems amid growing global digital threats.
The Google Pixel 8 Pro is receiving its first update for the AI Core app, exclusively enhancing AI-driven features on the device. Operating in the background, AI Core powers advanced functionalities like scene detection, Google Assistant responses, and personalized recommendations. It also manages on-device AI models for features like Magic Eraser and Photo Unblur. This update, specific to Pixel 8 Pro users, promises a more seamless and intelligent user experience.
Over a dozen countries, including the U.S., have agreed to a groundbreaking international pact prioritizing “secure by design” AI systems to protect against misuse. This non-binding agreement, emphasizing customer and public safety in AI development, aligns with recent European efforts for AI regulation and complements the U.S. executive order focusing on AI standards, safety guidelines, and content identification.
The UK, US, and 16 other countries have introduced an agreement to ensure AI systems are “secure by design,” aiming to prevent misuse by rogue actors. This 20-page, non-binding document focuses on safety-first principles in AI development, highlighting the need for monitoring and vetting processes. The initiative reflects a global effort to responsibly shape AI’s future amidst Europe’s lead in AI regulation and challenges in U.S. legislative progress.