The 2024 presidential vote will undoubtedly have a significant impact on several fronts, and artificial intelligence is no exception.
EY’s latest technology pulse poll, published in October, revealed that 74% of 503 tech leaders expect the election to impact AI regulation and global competitiveness. Although technology industry leaders stated their intention to considerably increase AI investments in the coming year, the trajectory of AI development still depends on the outcome of the election.
Respondents believe the outcome of the election will mainly affect regulation related to cybersecurity/data protection, AI and machine learning, and consumer data and privacy oversight.
“Of course, all of these are closely tied to innovation, growth and global competitiveness,” James Brundage, EY global and Americas technology sector leader, told TechRepublic. “The U.S. is the world’s tech innovation leader, so future tech policy may strike a balance that supports U.S. innovation while establishing guardrails where they are needed,” such as in data privacy, children’s online safety, and national security.
SEE: Year-round IT budget template (TechRepublic Premium)
Greater AI investment is needed
Regardless of the result of the presidential election, technology companies will continue to invest heavily in AI, the survey indicates. Nevertheless, the outcome may affect the direction of regulatory, tax, tariff, and antitrust policies, as well as interest rates, mergers and acquisitions, initial public offerings, and AI regulation, the survey said.
“We were surprised that trade/tariffs were not higher up on the minds of these leaders,” Brundage observed.
He cited the sluggish tech market in 2024 as evidence that “the 2025 path is bullish, as businesses focus on raising capital to invest in growth and emerging technologies like AI.”
The majority of tech leaders (82%) stated that their businesses intend to increase AI investments by 50% or more in the upcoming year. Those AI investments will focus on key areas including AI-specific talent (60%), cybersecurity (49%), and back-office functions (45%).
Most tech industry leaders surveyed also plan to reallocate resources to AI investments in the next six to twelve months, with 78% stating that their company is considering divesting non-core assets or businesses as part of their growth strategy during that time.
Larger businesses struggle with AI efforts
Brundage also found it surprising that 63% of tech leaders say their company’s AI initiatives have successfully advanced to the implementation stage.
“That number seems high, but several factors may explain it,” he noted. “First, companies may be focusing on short-term, low-hanging-fruit AI projects, which are easier to implement and have higher success rates, but may not be the opportunities with the greatest impact.”
Further, use of “quick-buy solutions like ChatGPT or Copilot, which are relatively simple to deploy and drive productivity, may inflate this percentage.” Also, successful implementation “likely means moving from proof of concept (POC) to implementation,” Brundage said, adding that “real challenges such as data quality, scaling, governance, and infrastructure still lie ahead.”
Additionally, size matters, according to the report, which found that larger companies had less success moving AI initiatives to the implementation stage.
Among respondents who said that fewer than half of their AI initiatives have been implemented successfully, data quality issues (40%) and talent/skills shortages (34%) are the most prevalent reasons AI initiatives fail to advance to the next stage.
How the election’s impact on AI could be felt
Given that the Federal Trade Commission and the Department of Justice have been very active and may continue to be so, Brundage said, current regulatory and enforcement trends related to AI may continue regardless of who takes office in 2025. And because “some legislative proposals are bipartisan … we expect that they will advance in 2025 or 2026,” such as those addressing children’s online safety.
However, he noted that “state legislatures and attorneys general also influence policy,” making it a nuanced playing field. “We expect these changes to be measured in years, not months.”
Tech leaders must realize, according to Brundage, that the United States is in a different geopolitical environment than it was five to ten years ago.
New government industrial policy in the U.S. and around the world is spurring business action, both in the tech sector and in related industries and supply chains. Global tech companies are at the forefront of geopolitics as nations try to avoid conflict.
He said that AI capabilities have also become highly competitive and geopolitically significant throughout the world. “There is a dual race to innovate and regulate here in the U.S. and elsewhere. We believe there is a need for business models that take into account the various regulatory strategies, such as sovereign frontier models.”
Wanted: AI tech talent search intensifies
According to the survey, organizations will need to hire more AI-specific talent as they continue to incorporate AI functionality into their businesses, and they will also need to restructure or reduce headcount in legacy job functions.
According to the survey, 77% of tech leaders anticipate an increase in hiring for AI-specific talent, and 83% anticipate reducing or restructuring headcount, shifting from legacy to other in-demand functions. Additionally, 40% of technology leaders said human capital efforts such as training will be the focus of their company’s AI investments next year.
AI’s effects on foreign policy and national security
Meanwhile, the Biden administration on Thursday released the first-ever AI-focused national security memorandum (NSM) to ensure that the U.S. continues to lead in the development and deployment of AI technologies. The memorandum places a premium on harnessing the technology while also protecting privacy, human rights, civil rights, and civil liberties.
The NSM also calls for the development of a governance and risk management framework for how organizations implement AI, as well as requirements for them to monitor, assess, and mitigate AI risks related to those protections.