For those working in STEM sectors, knowing how to safely and sustainably utilise AI is a skill in itself.
Artificial intelligence (AI) in the right hands can empower the user to be more creative, timely and exact in their work. But advanced technologies are often misused, whether unknowingly or deliberately, for reasons that range from a simple lack of training to purposefully harmful behaviour.
Governments worldwide are grappling with their individual AI policies. The US, for example, has begun effectively dismantling many of the safeguards put in place by former president Joe Biden.
In the UK, the argument over the fair and safe use of AI in relation to creative content rages on. More than 1,000 artists, including Kate Bush and Annie Lennox, have contributed to a silent album in protest against proposed changes to UK copyright law that would allow AI developers to train their models on copyrighted material unless rights holders actively opt out.
It has come to a point where individuals need to ensure they have the skills to engage with AI ethically, without compromising its effectiveness. But how can that be achieved?
Machine learning bias
Machine learning (ML) models, for all their advantages, are often biased. Regardless of who is behind the training, it is virtually impossible for the creator not to influence the data informing the technology. That is not to say the data is wilfully misleading; it simply stands to reason that data will always be shaped by the humans who collect and curate it.
Professionals planning a career in a STEM field should ensure they have a deep understanding of how ML models, data and algorithms are subject to prejudice and bias. They should also know which techniques to employ to improve neutrality.
Techniques to consider include feature blinding, which removes identifying attributes from the training data; monotonic selective risk, which improves accuracy for underrepresented groups; and adversarial classification, which helps users identify weak spots in a model so they can be addressed before they cause harm.
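As a rough illustration of the first of these, feature blinding can be as simple as dropping sensitive attributes from a dataset before a model is ever trained on it. The Python sketch below is a minimal example using scikit-learn and a hypothetical applicants.csv file with made-up column names (‘gender’, ‘ethnicity’, ‘approved’); it shows the idea rather than a complete fairness workflow.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset; 'gender' and 'ethnicity' stand in for sensitive attributes
    df = pd.read_csv("applicants.csv")
    sensitive = ["gender", "ethnicity"]

    # Feature blinding: remove identifying attributes so the model never sees them
    X = df.drop(columns=sensitive + ["approved"])
    y = df["approved"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))

It is worth remembering that blinding alone is not a cure-all: other features can act as proxies for the removed attributes, which is where approaches such as adversarial testing come in.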
Programming and software engineering
It is always better to approach a problem with the mindset that you will eventually solve it yourself. Basically, you should strive to be the solution you seek. STEM professionals who are concerned about the ethical nature of the systems they are using in a work capacity could benefit from knowing how to build, use and maintain those same systems.
By becoming skilled in a range of programming languages, such as Python, Java, SQL and R, experts can embed ethically sound principles into their code when building AI models. By knowing how to engineer AI systems that automatically include safeguards, professionals can work knowing that they are doing everything in their power to be safe, responsible and transparent AI users.
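What an automated safeguard looks like will vary from project to project, but one simple pattern is a pre-deployment check written into the pipeline itself. The Python sketch below uses hypothetical names (subgroup_accuracy_gap, deploy_if_fair) and an arbitrary threshold; it refuses to deploy a model whose accuracy differs too much across subgroups on validation data.

    import numpy as np

    MAX_GAP = 0.05  # hypothetical threshold agreed with stakeholders

    def subgroup_accuracy_gap(y_true, y_pred, groups):
        # Largest difference in accuracy between any two subgroups
        # (inputs are numpy arrays of equal length)
        accuracies = [np.mean(y_true[groups == g] == y_pred[groups == g])
                      for g in np.unique(groups)]
        return max(accuracies) - min(accuracies)

    def deploy_if_fair(model, X_val, y_val, groups):
        # Automated safeguard: block deployment if the fairness check fails
        gap = subgroup_accuracy_gap(y_val, model.predict(X_val), groups)
        if gap > MAX_GAP:
            raise RuntimeError(
                f"Deployment blocked: subgroup accuracy gap {gap:.2f} exceeds {MAX_GAP}")
        return model

Checks like this do not replace human judgement, but they make it harder for an obviously skewed model to slip quietly into production.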
Systems integration
Companies all over the world are using AI technologies to enhance the workforce and improve what their organisations can offer. The vast majority of these companies will have existed long before recent advances in AI, so it is crucial that professionals, particularly those in STEM who frequently use AI, know how to seamlessly integrate new and old tech.
Professionals should fully research any AI tools they intend to integrate to ensure they are a good fit. This might be ascertained by looking at data compatibility, cloud capabilities, the drain on resources and whether the tools meet the specific needs of the business.
If it becomes apparent that older systems are no longer compatible with new and emerging technologies such as AI, it might be time to let go of what isn’t working and upgrade digital infrastructure. That is why having STEM professionals skilled in systems integration is a plus for any organisation.
Because AI is typically used by companies to scale and grow, experts deploying new tech should ensure that the systems they are introducing meet current ethical standards. This can be accomplished by considering where the technology came from, what data informed it and the impact it might have on society and sustainability.