Gravitas Recruitment Group and Peak recently joined forces to co-host a panel conversation called ‘AI for Good’ as part of the AI, Data & Machine Learning Northwest Networking series.
The event featured a number of keynote speakers and panellists from the not-for-profit, private, and public sectors, who shared their insights and experiences with AI:
-
Matt Squire, CTO at Fuzzy Labs
-
Dr Ali Bukar, Computer Vision Principal at Ofcom
-
Holly Traynor, Insight Data Science Team Lead at Peak
-
James Crawford, AI Lead at Touchpoint Change Consulting
-
Juhi Gupta, Lecturer and Researcher in Health Data Science at the University of Manchester
In this blog post, I summarise some of the key points raised by the event’s speakers on how we can ensure AI continues to be used for good, covering:
-
AI and its capabilities
-
How it is currently being used to benefit society
-
How you might adopt AI into your own organisation
-
AI pitfalls and ethical considerations
A brief AI history
With the launch of ChatGPT by OpenAI last year and the subsequent buzz around AI, you could be forgiven for thinking that it is a new innovation. However, AI, in its basic form, has actually been around for a long time. A quick look at Wikipedia shows that the field first came to light in the 1950s, but that it wasn’t until 2012, when deep learning surpassed previous AI techniques, that it really started to gain traction.
A 2017 survey found that whilst 34% of people believed they had used AI, 84% had actually done so. In fact, most of us have been utilising it in some form or another for a while: in our smartphones, on social media, the internet and online shopping. From Siri and Alexa to Google search, Face ID, chatbots and algorithms serving us content based on our individual preferences, it has become part of our everyday personal lives without us even realising it.
It has also been used across many industries to address various challenges, and as it evolves it is presenting more and more opportunities for innovation and for solving business and societal problems. Our guests and panellists shared some real-world examples of how AI has been used for good within the not-for-profit and health sectors.
Using AI for good – real world examples
Using AI to distribute food
Holly Traynor from Peak detailed how AI helps businesses to optimise their inventories, pricing and customer personalisation. For the food redistribution charity FareShare Greater Manchester, the ability to efficiently forecast supply and demand means less food is wasted and more ends up with the people in the community who need it. FareShare’s operations involve a number of unknown variables, including how much food will arrive, what type of food it will be, and what the community’s needs are on any given day. By harnessing the power of Peak’s AI applications, FareShare is able to determine the most efficient way of distributing donations, and to generate personalised reports at the click of a button showing its donors and suppliers how their donations have been used.
Harnessing data to improve health outcomes
AI is also being widely applied in the medical industry, from robotics in surgical settings to predictive software. It is being used to improve patient care, for example in early diagnosis and in disease tracking and prevention. One of our keynote speakers, Juhi Gupta, from the University of Manchester, shared some examples of how data science approaches are being used in medical research to harness big data and improve health outcomes. One of these is the Greater Manchester Care Record project, which collated the regional digital health care records of approximately 2.8 million citizens. This played a huge role in understanding and improving direct care, particularly during the COVID-19 pandemic, by providing data for public health research. The University of Manchester has now received funding to develop unique research teams that help tackle complex health and societal problems by partnering researchers with AI.
Generative AI to provide specialist data
Products are also being built to do good within specialist areas. Matt Squire from Fuzzy Labs introduced us to MindGPT, an open-source product his company has developed using a Large Language Model (LLM) to help answer users’ specific mental health questions. While it is not intended to replace a qualified counsellor, the model has been trained to answer questions about mental health using data from the Mind and NHS websites.
Advice on how to adopt AI
As these examples highlight, there are many opportunities for businesses, whether in the not-for-profit, public or private sector. However, given the pace of innovation, and for companies with limited resources or finances, it can be difficult to know where to start. James Crawford of Touchpoint Change Consulting advises business leaders to follow a simple three-step process for adopting AI:
-
Inform yourself about what’s going on.
Assess what opportunities are out there, but also be aware that AI comes with risks; you need to educate yourself on both. Really understand what’s available in terms of technology, functionality and capabilities.
-
Imagine what it can do for you.
Get some visibility on how AI can help with your specific business challenges, both in the short term at low cost and over the long term, for example, how it could improve your customer service, or radically transform it.
-
Initiate the first steps towards adoption.
Start experimenting with what it can do and then evolve from there. Build your capability, confidence and business benefits.
How to ensure AI is used for good
So far in this article, we have discussed the multiple opportunities that exist for using AI for good, yet all of our guests recognised that along with opportunities come risks. They highlighted the responsibility that comes with AI adoption in considering the limitations, threats and ethical questions that arise.
Fears about the potential trajectory of AI’s evolution are well documented, with renowned figures such as Elon Musk even calling for a halt to development due to future risks to society. And, although AI is still limited in terms of independent intelligence, our panellists discussed the propensity of generative AI to hallucinate and to lean towards bias, alongside the potential of future AI to deceive. Dr Ali Bukar, Principal in Ofcom’s Trust and Safety Technology team, summarised some key points for mitigating these risks:
-
Transparency and accountability are essential: ensure that users understand how AI systems work, and hold those who build and deploy them responsible for their actions, for example through regulatory guidance.
-
Release and deploy AI systems in a phased or incremental manner, with careful consideration of the associated risks, for example by implementing robust vetting systems to grant access to powerful AI models.
-
Ensure AI systems protect data privacy, for example by promoting the integration of privacy-enhancing technologies into AI systems.
-
Address the carbon footprint of AI technologies, for example by introducing regulations, and through greater collaboration and education on what developers and organisations can do.
Other fears centre on AI taking human jobs. According to James Crawford, in the short to medium term we are unlikely to see huge numbers of people unemployed as a result of AI, although certain trades, such as taxi and lorry driving, are likely to feel the impact soon with the advent of self-driving vehicles. We will need to support the trades affected with opportunities to retrain and move into areas of the economy with skills shortages (of which there are currently many).
Our discussions acknowledged that as AI develops it will likely lead to changes in society. Further automation will eventually result in decreased human input, which might enable individuals to work less and spend more time on purposeful activities.
Policies will need to be put in place to ensure that people with less work are still able to make a living, which could lead to the introduction of more socialist measures such as a universal basic income. This, along with the likelihood of huge profits being made by big tech companies, or of AI being used for bad, will require new regulations and tight restrictions from politicians.
It’s clear from the shared experiences of our event guests that there are many great examples of AI being used for good and that, as it evolves, it presents us with unlimited opportunities. Forward-thinking businesses should be taking steps to understand its possibilities and how to harness them positively.
It is also crucial for us all to be mindful of its risks and limitations, and to use and train it responsibly. By being aware of the potential impacts, we give ourselves the best chance of harnessing its potential to overcome many of society’s challenges.
Are you looking to hire tech or AI professionals into your business?
Contact our specialist team for a consultation on how we can help tailor a hiring plan that future-proofs your business.
About the author
Charlie Moore, Principal Recruitment Consultant, joined Gravitas in 2018 and, after a short stint away, re-joined in 2022 to lead our Data Science and Machine Learning market. He is a senior member of our Data and Client Solutions teams, covering the recruitment process from start to finish. He works with clients to source Data Science and Machine Learning experts, as well as helping to hire senior, Director and C-Suite specialists across our established client base. Charlie also has a passion for working with universities to inspire the next generation of technology specialists.