Artificial Intelligence
Questions boards must ask about AI
Many boards aren’t always aware of what a successful AI strategy looks like, writes PROF. MARK NASILA, chief data and analytics officer at FNB.
Few business leaders dispute that artificial intelligence (AI) is on course to upend every major market sector and industry, but many boards aren’t always aware of what a successful AI strategy looks like, or how best to prepare their organisations for the change.
A 2022 survey of IoD members found that a significant majority of boards, approximately 80%, lacked a structured process to assess their use of AI, and that boards were often unsure which questions to ask about AI implementation. Board oversight at the outset of AI projects is crucial: without it, ethical issues can surface later, bringing reputational damage and financial consequences.
Recent research indicates a disconnect between board governance practices and the AI already embedded in businesses: astonishingly, over 86% of companies are using some form of AI without their boards’ knowledge. As AI applications continue to expand, it is imperative that boards comprehend both the opportunities and the threats AI presents.
Notably, AI has the potential to exacerbate existing biases present in human decision-making, underscoring the need for safeguards to prevent the perpetuation of bias within organisational systems or culture. An effective AI governance framework should be grounded in the fundamental ethical values of the business, and boards require a comprehensive model to guide their decision-making in this area.
Moreover, it’s essential for boards to have a clear understanding of their organisation’s stance on the ethical use of AI. AI must be given due consideration and included on the board’s agenda under the principles of ESG (Environmental, Social, and Governance), where it falls under the governance heading, and the requirements of CSR (Corporate Social Responsibility), given the social implications it can have.
The reflexive checklist: asking a few crucial questions can provide a board-level understanding of where an organisation stands when it comes to the ethical deployment of AI.
- What’s the plan to develop a business-led AI strategy?
Organisations should evaluate whether their strategic-planning processes adequately integrate AI, whether additional AI-focused planning sessions are needed, and whether new processes or personnel are required to incorporate AI and subject it to board scrutiny through strategic governance. Management should also assess how various AI scenarios could affect the organisation, its competitors, and the industry, while ensuring the board has opportunities to understand, shape, and challenge management’s perspectives on AI.
- What are the opportunities and how can they be monetised?
Leadership needs to be able to identify not merely use cases for AI, but ways it can provide their organisation — or elements of the supply chain — with unique value-capturing opportunities or competitive advantages.
- Where could AI create the most significant and sustainable value?
To effectively navigate the impact of AI, boards need to comprehend the areas where management envisions its greatest potential and associated risks. This task involves developing a perspective that considers different timeframes, accounting for the current state of AI technology and anticipated advancements, and identifying potential advantages that may arise as AI continues to evolve.
- What are key customer demands AI could meet?
The risk associated with AI lies in companies perceiving it solely as a tool to enhance productivity, rather than recognising it as a transformative business paradigm that should align with customer requirements. Boards should pose critical questions regarding the impact of AI on the customer experience and the relationship between customers and the organisation. It is essential to understand how AI can enhance the customer experience, what customers specifically desire from AI, and the expectations they have for the organisation’s utilisation of AI.
- Are we training the right people?
As new technologies like AI become more prevalent and drive changes in organisational processes and offerings, the significance of training within companies is growing. However, training efforts are often misdirected, with IT professionals being forced into business roles despite their limited understanding of the business domain, as their training primarily focuses on software. To ensure successful implementation of AI, a more effective approach is for companies to prioritise training business leaders and domain experts in IT skills and knowledge, enabling them to harness the potential of AI technologies in their respective domains.
- How are you ensuring that teams are keeping up with AI/ML developments?
As robotics and AI technology contribute to job displacement, the conversation surrounding companies’ responsibilities towards employment and training is gaining momentum. Workers affected by these changes require training for future job opportunities, yet government programs often have limited availability and effectiveness. Consequently, companies, which will inevitably need talent equipped with the latest technological skills, bear the responsibility of retraining displaced workers. When implementing robotics, for example, companies must consider the long-term impact on their workforce and anticipate the skills they will require in the future. Incorporating these factors into the company’s social agreement with employees becomes crucial.
- Where can we redeploy new tech and retrained talent?
The introduction of new technologies does not necessarily lead to unemployment but rather necessitates a two-generation training approach. By automating repetitive tasks, robots enhance productivity and create opportunities for retrained employees to engage in more knowledge-based work. This organisational shift prompts companies to think innovatively about strategic capitalisation and identify areas where the retrained talent can generate a competitive advantage.
Another key question is how organisations can use AI to manage their workforces and create jobs rather than replace them. For example, the Mizuho Financial Group in Japan plans to replace about a third of its workforce with AI by 2027, raising concerns about potential bias in credit risk assessments. It is crucial for executives to explain the benefits of AI to employees, emphasising its role in augmenting jobs rather than replacing them.
By eliminating repetitive tasks and introducing new ones that require human judgment and expertise, AI can enhance efficiency and decision-making, such as in fraud detection. Companies that solely view AI as a cost-cutting opportunity risk deploying machine learning (ML) in inappropriate areas, leading to compromised outcomes and job cuts instead of role upgrades. Boards should seek a comprehensive understanding of management’s perspective on workforce shifts, training strategies for enhancing competitiveness, and social measures to support affected individuals.
Proactive discussions about the future of the workforce, including qualifications for entry-level jobs, part-time employment models, access to freelancers and experts, and the involvement of existing employees in AI system training, are necessary. Prioritising good employment practices and securing a promising future for current and future employees is vital for business sustainability and should not be overlooked by the board.
- What biases could be present in data collection?
Implicit biases exist in the values that shape the datasets used to train computer systems. These biases can lead to problematic outcomes, such as ML applications favouring male candidates for job positions or associating certain professions or behaviours with specific genders or races.
Examples include the association of dark-skinned faces with gorillas in photo recognition systems, or racially biased algorithms used in criminal justice sentencing. A lack of diversity in the teams that train AI models and build datasets contributes to these biases. To mitigate bias, diverse coding teams should be involved in AI development, ensuring representation of all kinds of people and perspectives. Boards should prioritise questioning the diversity of coding teams to address these issues.
Companies operating in the attention economy — like Google, Facebook, and Twitter — often place the most value on generating attention and maximising user engagement. However, this focus on monetisation potential can result in biased data sets and algorithms, disregarding context, nuances, and the needs of niche users. Boards overseeing such businesses should not shy away from asking difficult questions, as the monetisation model may have prioritised profitability over privacy, fairness, and personal preferences of consumers.
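The kind of bias described above can often be surfaced with a very simple audit. As a minimal, hypothetical sketch (the group names and counts below are illustrative, not drawn from any real system), the “four-fifths rule” heuristic compares selection rates across groups and flags any group whose rate falls well below the best-treated group’s:

```python
# Minimal disparate-impact audit using the "four-fifths rule" heuristic
# (a common screening rule of thumb, not a legal test). All numbers are
# illustrative.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return group -> True if its selection rate is at least `threshold`
    times the highest group's rate, False if it is flagged."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Hypothetical hiring-model outcomes: (candidates selected, candidates seen)
outcomes = {"group_a": (45, 100), "group_b": (27, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.27) is only 0.6 of group_a's (0.45), below 0.8 -> flagged
```

A check like this is deliberately crude; it shows only that a board can demand simple, quantified evidence of fairness rather than accepting assurances.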
- How can we ensure data transparency and avoid black boxes?
The use of “deep learning” models raises concerns regarding product liability, rights, liberty, and governance. The inability to understand the reasoning behind the weights assigned by neural nets poses risks in fields like healthcare, finance, law enforcement, and education. The AI Now Institute suggests eliminating unvalidated and pre-trained black box models in core public agencies to mitigate this.
Efforts are being made to address the issue of black box models, such as neural network architectures that highlight influential areas in videos, or techniques that explain conclusions drawn from written data. DARPA is funding research projects in Explainable AI (XAI), which is likely to be a significant area of future research. Questions remain, however, about whether companies with large datasets and AI expertise will benefit most, potentially entrenching their dominance in AI markets.
Forward-thinking boards can inquire about the application of deep learning models in product design, vendor introductions, and efforts to understand how these models reach their conclusions.
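One widely used, model-agnostic way to probe a black box is permutation importance: scramble one input at a time and measure how much the model’s accuracy drops. A toy sketch follows, with a deliberately trivial stand-in “model” and made-up rows (a real audit would use the production model and held-out data):

```python
import random

# Toy "black box": predicts 1 when income is high, silently ignoring zip_code.
def model(income, zip_code):
    return 1 if income > 50 else 0

# Illustrative rows: (income, zip_code, true_label)
data = [(30, 1000, 0), (70, 1001, 1), (45, 1002, 0), (90, 1003, 1)] * 25

def accuracy(rows):
    return sum(model(inc, z) == y for inc, z, y in rows) / len(rows)

def permutation_importance(rows, feature_index, seed=0):
    """Accuracy drop when one feature column is shuffled across rows."""
    rng = random.Random(seed)
    col = [row[feature_index] for row in rows]
    rng.shuffle(col)
    shuffled = [tuple(col[i] if j == feature_index else v
                      for j, v in enumerate(row))
                for i, row in enumerate(rows)]
    return accuracy(rows) - accuracy(shuffled)

print(permutation_importance(data, 0))  # shuffling income hurts accuracy
print(permutation_importance(data, 1))  # model ignores zip_code -> 0.0
```

The point for a board is that such probes reveal which inputs a model actually relies on, without requiring access to its internals.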
- What kind of threats are we creating at home?
As we develop smart cities and interconnected infrastructures, we must recognise the growing risks within our own environments. One particular area of concern is the vulnerability of solar energy panels and wind generators to cyber-attacks, which could potentially disrupt power grids and compromise our power supply.
It is crucial to identify the sources of these threats, determine the potential targets, and understand the motivations behind the hackers. When assessing the implementation of interconnected technology within your company, it is vital to consider all potential risk vectors and take appropriate measures to mitigate them.
- How will changing standards affect the organisation’s opportunities in different markets?
American companies have for a long time held the advantage of establishing global technology standards, enabling them to define norms for various technologies ranging from USB ports to more advanced systems, which were subsequently adopted worldwide. This gave them a significant competitive edge. However, the dynamics have now shifted.
China has embarked on a substantial national investment, dedicating billions to shape the standards for all technology and AI by 2030. At both the local industry and enterprise levels, China is setting the stage for competitiveness and redefining work practices. Additionally, it is establishing standards within critical sectors such as infrastructure, energy, utilities, and transportation. These developments have implications for how U.S. companies can introduce their products to international markets.
- How vulnerable is the organisation, and what is being done about it?
The use of AI in cybersecurity introduces challenges due to the scalability of compromised code and the potential for bugs, biases, and malware. Criminal groups also exploit AI for malicious purposes. Ensuring cybersecurity requires understanding hacker motives and techniques; ML systems have already been used to generate hard-to-detect phishing URLs, highlighting the need for effective defence mechanisms.
The proliferation of ML solutions carries risks of its own, though, such as incomplete training data or compromised algorithms, which is why diversifying algorithms matters. Adversarial AI is used to hack AI systems, as seen in medical image analysis, where imperceptible alterations can manipulate results. Corporate boards play a crucial role in overseeing cybersecurity risk, relying on CISOs and networking with other companies to address adversarial attacks.
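The “imperceptible alterations” behind adversarial attacks can be demonstrated on even the simplest model. The sketch below applies a gradient-sign-style nudge to a hand-written linear classifier; the weights and input are entirely made up for illustration:

```python
# Toy adversarial example: a small, targeted nudge flips a linear
# classifier's decision. Weights and input values are illustrative only.

w = [0.6, -0.4, 0.8]   # model weights
b = -0.5               # bias term

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

x = [0.5, 0.2, 0.4]    # original input; score = 0.04, so class 1
eps = 0.1              # small perturbation budget per feature

# Gradient-sign-style attack: move each feature against its weight's sign,
# so every component of the change pushes the score downward.
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(predict(x))      # 1
print(predict(x_adv))  # 0: the score drops by eps * sum(|w|) = 0.18
```

For richer models such as image classifiers, the same principle applies at scale: thousands of pixel-level nudges, each invisible on its own, combine to flip the output.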
- Is the organisation complying with evolving AI regulations?
Reacting excessively to accidents involving autonomous vehicles — like the case of the Uber vehicle that caused a pedestrian’s death — can lead to more problems. It is important to acknowledge that accidents are an inherent risk in technological progress. Regulatory compliance and transparency are crucial in the evolution of AI technology.
While we may gain insights into how deep learning models make decisions, ethical dilemmas like the trolley problem may never be fully resolved. However, designing AI models without considering ethics and governance will result in poor ethics and governance, and carries significant reputational risks.
- What relevant global research can the organisation exploit?
Is the board equipped with effective environmental scanning processes to stay updated on the latest advancements in AI? How does the board and management intend to keep pace with AI developments, particularly those happening internationally? Possible approaches could involve presentations by renowned AI experts during board meetings, visits to leading technology companies such as those in Silicon Valley, or access to commissioned or up-to-date research reports.