What Happened
Last week, the White House published a memorandum on advancing US leadership in artificial intelligence. The national security-focused document is aimed at ensuring that the country leads the world’s development of “safe, secure, and trustworthy AI.”
Accompanying the memorandum was a 14-page framework to advance AI governance and risk management. National Security Advisor Jake Sullivan called the memorandum “the nation’s first-ever strategy for harnessing the power and managing the risks of AI to advance our national security.”
InnoLead members may find the documents helpful or instructive as they create their own policies and frameworks.
The memorandum addresses numerous topics, many of which relate to national security and the development of ethical AI. We won’t tackle them all here, but will highlight a few items that intersect with the private sector:
- Talent, Recruitment & Retention — First, the Departments of Defense, State, and Homeland Security have been charged with using “all available legal authorities” to attract and bring to the US individuals with AI expertise. That includes directing relevant agencies to work on “streamlining administrative processing operations for all visa applicants working with sensitive technologies.” The Departments of State, Defense, Justice, Homeland Security, and others are also expected to revisit hiring and retention policies and strategies “to accelerate responsible AI adoption,” including education and training opportunities.
- Investment & Infrastructure — There are multiple efforts around designing and building facilities to harness AI for research and intelligence. Those will include interagency efforts to “streamline permitting, approvals, and incentives for the construction of AI-enabling infrastructure.” The Departments of Energy, State, and Commerce, as well as the Intelligence Community, are also empowered to make “public investments and encourage private investments in strategic domestic and foreign AI technologies and adjacent fields.”
- Procurement — The DoD and the Office of the Director of National Intelligence are charged with creating a working group to address issues involving procurement of AI at the DoD and elsewhere. That group will focus on “simplifying processes such that companies without experienced contracting teams may meaningfully compete for relevant contracts.”
Again, there are myriad other components in the memorandum.
Why It Happened
Perhaps of most interest to our readership, the memorandum states that all government agencies will need to appoint a Chief AI Officer.
The Chief AI Officer will be responsible for overseeing the agency’s use of AI, promoting AI innovation within the agency, and managing risks. The government published a separate 34-page memo on the roles and responsibilities of the Chief AI Officer.
Among other responsibilities, the “CAIO” will have to publish the agency’s AI use cases “at least annually,” unless they meet certain confidentiality requirements.
Each government agency also needs to establish an AI Governance Board, and must publish guidance on AI risks and risk-management practices.
There is already much talk in the private sector about the necessity — or lack thereof — for a Chief AI Officer. While some InnoLead members have already posted job descriptions for their own CAIO, others have argued that the position is an enabling capability that doesn’t require a dedicated overseer. “Companies don’t have Chief Cloud Officers,” said one attendee of InnoLead’s Impact conference last month, “and they shouldn’t require Chief AI Officers either.”
And while some companies have indeed hired Chief AI Officers — including eBay, Dell, ADP, Entergy, and AllianceBernstein — the vast majority of CAIOs are at AI startups, vendors, service providers, and government agencies.
What Happens Next
While government agencies are now mandated to have Chief AI Officers, at many companies the CAIO will be yet another “Faux C-Level Leader” with a great title, no team, no measurable goals, insufficient resources, and a road map that steers them toward conflicts with other technology leaders. Creating a CAIO role may be done to check a box, or to have a single “throat to choke” if an AI project goes off the rails. A handful of CAIOs may be able to figure out how to use AI to help their organizations build actual competitive advantage, not just write AI policy docs or speak on panels.
Based on conversations with AI experts, the biggest questions companies need to answer before considering such a hire include the following:
- Remit: What will the CAIO’s specific mandate be? What exactly will they be responsible for, and how will success be measured? A Chief AI Officer responsible for training employees and developing a data science team, for example, will have very different objectives than a CAIO responsible for overseeing and evaluating real-world tests and tools across the enterprise.
- Turf Wars: Most organizations already have AI programs overseen by the CTO, the Chief Data Officer, and/or business unit leaders. Adding or appointing a separate Chief AI Officer can cause problems related to turf wars, communication, and decision-making. Unless the CAIO is simply overseeing policies, standards, literacy, or governance, problems can quickly emerge related to departmental oversight of strategy and execution.
- Priorities: Artificial intelligence is not a strategy; rather, it is an enabling capability like data analytics, blockchain, or cloud computing. Elevating a Chief AI Officer can inaccurately broadcast to employees that AI itself is the strategy, rather than an enabler of it. This can lead to an abundance of AI initiatives for the sake of AI, each of which may lack a connection to core strategy or to delivering business results.
- Deceleration: Centralizing AI under a specific title or function could lead to siloing, which in turn can slow innovation, experimentation, and productivity. Artificial intelligence, particularly GenAI, is a fully democratized capability: anyone in the company, at any level, can experiment with it. By signaling that only the Chief AI Officer can utilize the capability, or delegate its use, companies risk slowing the pace of innovation.
More Resources
InnoLead has posted AI policies, slides, and job descriptions in our Resource Center, and has been actively producing case studies and articles on GenAI.
We also published a report on how AI is influencing corporate innovation priorities, and another on how AI is impacting innovation software. And we’ve hosted several webcasts and master classes on deploying GenAI in regulated industries, and on how to prevent AI from becoming the next round of innovation theater.
Featured image by Jainam Sheth on Unsplash