29 December 2023
The year ending 2023 showcased the hopes and fears around AI as transformative technology went mainstream.
The artificial intelligence (AI) industry began 2023 with a bang as schools and universities struggled with students using OpenAI’s ChatGPT to help them with homework and essay writing.
Less than a week into the year, New York City Public Schools banned ChatGPT – released weeks earlier to enormous fanfare – a move that would set the stage for much of the discussion around generative AI in 2023.
As the buzz grew around ChatGPT – developed by Microsoft-backed OpenAI – and rivals like Google’s Bard AI, Baidu’s Ernie Chatbot and Meta’s LLaMA, so did questions about how to handle a powerful new technology that had become accessible to the public overnight.
While AI-generated images, music, videos and computer code created by platforms such as Stability AI’s Stable Diffusion or OpenAI’s DALL-E opened up exciting new possibilities, they also fuelled concerns about misinformation, targeted harassment and copyright infringement.
In March, a group of more than 1,000 signatories, including Apple co-founder Steve Wozniak and billionaire tech entrepreneur Elon Musk, called for a pause in the development of more advanced AI in light of its “profound risks to society and humanity”.
While a pause did not happen, governments and regulatory authorities began rolling out new laws and regulations to set guardrails on the development and use of AI.
While many issues around AI remain unresolved heading into the new year, 2023 is likely to be remembered as a major milestone in the history of the field.
In August, the International Labour Organization, the UN’s labour agency, said that generative AI is more likely to augment most jobs than replace them, with clerical work listed as the occupation most at risk.
Year of the ‘deepfake’?
The year 2024 will be a major test for generative AI, as new apps come to market and new legislation takes effect against a backdrop of global political upheaval.
Over the next 12 months, more than two billion people are due to vote in elections across a record 40 countries, including geopolitical hotspots like the US, India, Indonesia, Pakistan, Venezuela, South Sudan and Taiwan.
In a survey of 305 developers, policymakers and academics carried out by the Pew Research Center in July, 79 percent of respondents said they were either more concerned than excited about the future of AI, or as concerned as they were excited.
Despite AI’s potential to transform fields from medicine to education and mass communications, respondents expressed concern about risks such as mass surveillance, government and police harassment, job displacement and social isolation.
Sean McGregor, the founder of the Responsible AI Collaborative, said that 2023 showcased the hopes and fears that exist around generative AI, as well as deep philosophical divisions within the sector.
“Most hopeful is the light now shining on societal decisions undertaken by technologists, though it is concerning that many of my peers in the tech sector seem to regard such attention negatively,” McGregor told Al Jazeera, adding that AI should be shaped by the “needs of the people most impacted”.
Drama at OpenAI
After ChatGPT amassed more than 100 million users in 2023, developer OpenAI returned to the headlines in November when its board of directors abruptly fired CEO Sam Altman, alleging that he was not “consistently candid in his communications with the board”. Altman was reinstated days later following an outcry from employees and investors.
While OpenAI has tried to move on from the drama, the questions raised during the upheaval remain unresolved for the industry at large – including how to weigh the drive for profit and new product launches against fears that AI could grow too powerful too quickly, or fall into the wrong hands.
While online misinformation campaigns are already a regular part of many election cycles, AI-generated content is expected to make matters worse as false information becomes increasingly difficult to distinguish from the real thing and easier to replicate at scale.
AI-generated content, including “deepfake” images, has already been used to stir up anger and confusion in conflict zones such as Ukraine and Gaza, and has been featured in hotly contested electoral races like the US presidential election.
Meta last month told advertisers that it will bar political ads on Facebook and Instagram that are made with generative AI, while YouTube announced that it will require creators to label realistic-looking AI-generated content.
Legislating the future
In December, European Union policymakers agreed on sweeping legislation to regulate the future of AI, capping a year of efforts by national governments and international bodies like the United Nations and the G7.
Key concerns include the sources of information used to train AI algorithms, much of which is scraped from the internet without consideration of privacy, bias, accuracy or copyright.
The EU’s draft legislation requires developers to disclose their training data and compliance with the bloc’s laws, with limitations on certain types of use and a pathway for user complaints.
Similar legislative efforts are under way in the US, where President Joe Biden in October issued a sweeping executive order on AI standards, and the UK, which in November hosted the AI Safety Summit involving 27 countries and industry stakeholders.
SOURCE: AL JAZEERA