Virgo Associates

Balancing AI's promise and pitfalls


Artificial intelligence (AI) continues to bring benefits across many industries, including healthcare diagnostics and consumer technology. However, as its applications expand, so do concerns about its accuracy and potential for misuse. Two recent examples—the use of AI in detecting ovarian cancer and its controversial implementation in summarising news—illustrate both the transformative potential and the risks of AI.


AI in early cancer detection


Ovarian cancer is notoriously difficult to detect in its early stages. Early intervention is critical for improving survival rates. However, current methods rarely identify the disease before it spreads.


A breakthrough by Dr Daniel Heller and his team at Memorial Sloan Kettering Cancer Center offers hope. They have developed an AI-powered blood test that uses nanotube technology—tiny tubes of carbon that react to molecules in the blood. These nanotubes emit fluorescent light based on what binds to them, creating a molecular "fingerprint."


The challenge lies in interpreting this data. While the molecular patterns are too subtle for humans to discern, machine-learning algorithms excel at recognizing such complexities. By training AI systems with blood samples from patients with and without ovarian cancer, the team can identify the disease far earlier than conventional methods.
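For readers curious how this kind of pattern-matching works in principle, the classification step can be sketched in a few lines of Python. This is a toy illustration only: the fingerprints, labels, and nearest-centroid approach below are invented for demonstration and are not the Memorial Sloan Kettering team's actual data or method.

```python
# Toy sketch of classifying molecular "fingerprints": invented data, not the
# real system, which uses nanotube fluorescence spectra and far richer models.
import math

def centroid(samples):
    """Average each feature across a list of equal-length fingerprints."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(positive, negative):
    """'Train' by computing one centroid per class."""
    return {"cancer": centroid(positive), "healthy": centroid(negative)}

def classify(model, fingerprint):
    """Assign the label of the nearest class centroid."""
    return min(model, key=lambda label: distance(model[label], fingerprint))

# Invented fluorescence 'fingerprints' (arbitrary units) for demonstration.
cancer_samples = [[0.9, 0.2, 0.7], [0.8, 0.3, 0.6]]
healthy_samples = [[0.2, 0.8, 0.1], [0.3, 0.7, 0.2]]

model = train(cancer_samples, healthy_samples)
print(classify(model, [0.85, 0.25, 0.65]))
```

Even this crude version shows the core idea: given labelled examples, an algorithm can separate patterns that look indistinguishable to the human eye. Real diagnostic models are vastly more sophisticated, which is why data quality and validation matter so much.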


This innovation could revolutionise diagnostics, not just for ovarian cancer but for other diseases, including infections like pneumonia. However, as with any AI system, its effectiveness depends on the quality of the data and algorithms used, which brings us to a story that highlights the risks involved with AI.


The risks of misapplied AI 


Apple’s AI-driven news summarising feature on its latest iPhones has drawn criticism for generating inaccurate headlines. The feature is designed to reduce the number of notifications smartphone users receive. However, the BBC said that “these AI summarisations by Apple do not reflect – and in some cases completely contradict – the original BBC content.”


The BBC and the journalism body Reporters Without Borders have called for Apple to withdraw the feature, citing the dangers of misinformation.


Apple has now announced that a software update in the coming weeks will make it clearer that summaries are AI-generated, but critics argue this is insufficient. The responsibility for verifying accuracy would still rest with users, making it harder for them to obtain accurate information and eroding trust in the news.


Lessons for businesses


These two contrasting examples offer valuable lessons for businesses looking to integrate AI.


Firstly, ensuring accuracy is paramount. This is especially clear in high-stakes healthcare applications, where a false positive or false negative in diagnostics can have life-altering consequences. In any application, however, AI systems should be subject to robust testing and validation checks.


Secondly, clear communication about your use of AI is essential, as miscommunication about AI’s role and limitations can damage trust. Apple’s initial failure to flag its summaries as AI-generated contributed to public confusion and backlash.


AI systems have the potential to disseminate false information. Therefore, they need to be designed with safeguards and checks to prevent this from happening.


Balancing promise with caution


AI has the potential to bring many benefits. However, as the two examples above illustrate, the technology is not without risks.


In the rush to innovate, the lesson is clear: AI is best approached with caution, with its use rigorously tested and clearly communicated, so you can harness its benefits while minimising the downsides.


See: https://www.bbc.co.uk/news/articles/cq8v1ww51vno; https://www.bbc.co.uk/news/articles/cge93de21n0o
