Despite only recently being released into the public forum, OpenAI’s headline-making AI-powered chatbot, ChatGPT, already has domain-specific counterparts: Microsoft has just launched BioGPT, a generative language model trained for the life sciences and biomedical industries.
Why it’s Notable:
- Unlike its general-purpose predecessor, BioGPT has been trained on biomedical text data such as scientific publications, clinical notes and drug labels. Microsoft has claimed that the system can understand crucial nuances of scientific language, such as distinguishing between drug, gene and protein names.
- Beyond helping university students complete assignments and write research reviews (though we would caution against this, as described below), BioGPT has great potential in areas such as drug discovery, precision medicine and clinical trial design, because it can automate the analysis of scientific literature and identify relationships between entities of interest. It can also help predict drug-drug interactions and side effects of drug combinations by drawing inferences from previously published results. A short usage sketch follows this list for readers who want to experiment with the model directly.
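For the curious, BioGPT checkpoints are published on Hugging Face and can be loaded through the transformers library. The sketch below is illustrative only: it assumes the transformers, torch and sacremoses packages are installed and uses the publicly listed "microsoft/biogpt" checkpoint, and the prompt is a made-up example. Note that BioGPT generates continuations of biomedical text rather than answering questions conversationally like ChatGPT.

```python
# A minimal sketch: generate biomedical text with BioGPT via Hugging Face transformers.
# Assumes `pip install transformers torch sacremoses` and the public "microsoft/biogpt" checkpoint.
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Illustrative biomedical prompt; BioGPT continues the text it is given.
prompt = "Aspirin interacts with warfarin by"
inputs = tokenizer(prompt, return_tensors="pt")

# Beam search tends to produce more fluent continuations than greedy decoding.
outputs = model.generate(**inputs, max_length=60, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any output produced this way should be treated as a starting point for human review, not a verified result, for the reasons discussed below.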
Industry Implications:
- ChatGPT has dominated headlines since its release, raising excitement, trepidation and skepticism in equal measure. Debate has been widespread, with many raising concerns over the accuracy of results and warning against over-hyping such systems as being more intelligent than they actually are. Going forward, it will be important to recognise the limitations of these systems and to carefully consider where they are best placed to aid existing processes. This will be especially true when they are applied to life sciences research and discovery, where accuracy is imperative. Careful consideration must also be given to bias that may have been present in training data and how this might affect the results generated.
- Further, BioGPT, like other large language models, is powered by deep artificial neural networks. It has been described as a “black box” technology, meaning that how the system arrives at a given output is not fully transparent, even to its developers. Though we are far from apocalyptic scenarios, advances in this area have caused enough concern that many prominent industry figures, such as Steve Wozniak and Elon Musk, have signed an open letter urging a pause in the development of the most powerful AI systems until their capabilities and associated risks can be fully assessed.