In the modern world, digital computing systems and technologies are used everywhere to support our society. In this episode, we are joined by ACD/Labs Vice President of Innovation and Informatics Strategy, Andrew Anderson, and Strategic Partnerships Director, Graham McGibbon, to discuss the role of digitalization and digital transformation in the context of pharmaceutical research and development.

Andrew and Graham share their thoughts on what digital transformation looks like in business and in the lab, highlighting the importance of digitalizing data to enhance productivity and quality across the board.

Read the full transcript

Barbora Townsend  00:17

Digitalization, digital transformation and even machine learning and AI are topics that are frequently discussed by research scientists. But what do these terms actually mean in the context of pharmaceutical research, and how can you, as researchers, support successful digital transformation?

Baljit Bains  00:32

You may remember that back in February of this year, we were joined by MilliporeSigma scientists to discuss their new chemistry software. Today, we are going to delve deeper into digitalization in the pharma industry and how it is adapting to the ever-changing digital landscape.

Barbora Townsend  00:47

Hi, I’m Bara.

Baljit Bains  00:49

And I’m Bally. We are the hosts of The Analytical Wavelength, a podcast about chemistry and analytical data brought to you by ACD/Labs.

Barbora Townsend  00:58

In this two part series, we are joined by our Vice President of Innovation and Informatics Strategy, Andrew Anderson, and our Strategic Partnership Director Graham McGibbon.

Baljit Bains  01:08

Andrew and Graham are going to share their thoughts on digitalization, its importance in pharmaceutical R and D, how it fits in with the use of AI, and we’ll share some words of wisdom for anyone looking to advance their research through digital transformation.

Sarah Srokosz  01:20

Hello, Andrew and Graham. Welcome back to the podcast. As you both know, we usually start our conversations by asking our guests about their favorite chemicals. Last time you were on, Andrew, you said beta-alanine. And Graham, you said octafluorocubane. Do you have another new molecule that is up there on your list that you would like to share with us? Maybe we’ll go to Andrew — you first?

Andrew Anderson  01:49

Oh, I want to go to Graham, because he has given this subject such thought.

Graham McGibbon  01:54

Yeah, thanks, Sarah. And mine comes out of a meeting that we had in our corporate office, which we don’t do all that often anymore. But it was a really nice afternoon organized by some colleagues — Ljiljana, Arvin, and Irina — who aren’t on this call. And our art director, Robert, mentioned during it that he liked the durian fruit, and it’s rather infamous for its unpleasant odor. So I did a bit of research while we were talking, and afterwards. And I found — with a nod to Sam Lemonick of Forbes — a study performed in 2017 by Li, Schieberle, and Steinhaus in the Journal of Agricultural and Food Chemistry. They determined that not one but two chemicals — so I’m going to name two chemicals — together were indistinguishable from the durian odor reference, and neither one of them smells like durian on its own, which I think is the reason why chemicals are interesting. One apparently smells oniony, and the other kind of fruit-like, and the two chemicals are ethyl (2S)-2-methylbutanoate and 1-(ethylsulfanyl)ethane. So those are my chemicals for today, and the reason behind it.

Andrew Anderson  03:11

That’s so awesome, Graham, because — the irony is, and Sarah, we didn’t prep for this, okay — the durian fruit extract in that study Graham references was characterized at the Technical University of Munich. In my prior life in the food and beverage industry, they were one of our partners, and I’ve been to that lab. And you can guess — maybe here’s where the two worlds collide — that lab is full of analytical equipment, and the method by which they determined these structures is gas chromatography with mass spectrometry. And the collection of MS equipment at the Technical University of Munich is world-renowned, and I’m sure they can talk about their analytical data challenges. So, a nice transition to The Analytical Wavelength podcast. No doubt that’s a really cool small-world effect, right, Graham?

Graham McGibbon  04:16

Absolutely

Sarah Srokosz  04:17

Absolutely, I agree as well. It really does seem like the world of chemistry feels a little small sometimes, even though there’s lots of people all over the world doing this work. But as you mentioned, Andrew, that is a great segue into The Analytical Wavelength podcast, and today we’re talking about digitalization and digital transformation. So let’s start with just some basic definitions. What does digitalization mean in the context of pharmaceutical research?

Graham McGibbon  04:51

Yeah, that’s super, and I think it’s important to go even one step back and say, first, that to digitize means to encode using binary symbols — zeros and ones. Everybody knows that. Those are digital representations, then — of what? Of observations, of measurements, of data and information, and things that are derived from them. And digital representations are fundamentally compatible with the modern electronic digital computing systems that are used everywhere supporting our society, not just the lab. And I’d go further and say that Gartner defines digitalization as the use of digital technologies to change a business model and provide new revenue and value-producing opportunities — the process of moving to a digital business. So it’s a process, and it involves digital technologies, and it’s important that those are value-producing. That’s not always pure revenue, but value-producing to society. And so — we don’t just do it as a hobby, I would say. And that’s important for our business. That’s part of the reason to recognize that.

Andrew Anderson  05:59

Yeah, I’ll add a bit more on the concept of digitalization as it pertains to laboratory experimentation, right? There’s data that gets generated across — I like to use the term DMTA: design, make, test, analyze. In the DMTA cycle, data gets generated at every point, right? And to digitalize the, call it, representation of that cycle requires a considerable amount of attention and focus on the types of data that get generated. But just as important is: how do you establish relationships between that orthogonal set of data, from design to make, right, to test, and then ultimately to analyze? The A in DMTA requires a linking — in the relational database world, we call that a primary key/foreign key relationship, right? And so being able to digitalize from the experiment — first the conceptual, to the physical, and then ultimately to the digital — requires what, in computer programming terms, we call serialization, right? Taking data in one format and transforming it to a different format that’s more digestible by the mechanism you’re going to use to transmit it. And so I always like to add the serialization/deserialization bit to any digitalization-type topic or discussion, right? The ability to transform data from one format to another is really important to establish that, in some cases, orthogonal relationship. So I hope that’s helpful in rounding out the depiction of applications in the scientific world.
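To make the primary key/foreign key linking and the serialization step Andrew describes a little more concrete, here is a minimal sketch. All of the table names, column names, and values are hypothetical, for illustration only — not any particular vendor’s schema.

```python
# A minimal sketch of linking DMTA stages with primary/foreign keys: each
# stage gets its own table, and foreign keys tie make -> design and
# test -> make. All names and values here are hypothetical.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE design (design_id INTEGER PRIMARY KEY, target TEXT);
CREATE TABLE make   (make_id   INTEGER PRIMARY KEY, batch TEXT,
                     design_id INTEGER REFERENCES design(design_id));
CREATE TABLE test   (test_id   INTEGER PRIMARY KEY, purity REAL,
                     make_id   INTEGER REFERENCES make(make_id));

INSERT INTO design VALUES (1, 'candidate A');
INSERT INTO make   VALUES (1, 'batch-001', 1);
INSERT INTO test   VALUES (1, 98.7, 1);
""")

# Follow the key relationships from a test result back to its design.
target, batch, purity = conn.execute("""
    SELECT d.target, m.batch, t.purity
    FROM test t
    JOIN make   m ON t.make_id   = m.make_id
    JOIN design d ON m.design_id = d.design_id
""").fetchone()

# Serialization: transform the relational record into JSON, a format more
# digestible by whatever mechanism will transmit it.
payload = json.dumps({"target": target, "batch": batch, "purity": purity})
```

The same record could just as easily be deserialized on the receiving end with `json.loads` — that round trip is the serialization/deserialization bit mentioned above.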

Sarah Srokosz  08:05

Yeah — and I think you brought up the term “transformation” there, which is the next term that I wanted to define. So, what is digital transformation, which is another term that gets used a lot, and how is it distinct from digitalization?

Andrew Anderson  08:25

Yeah, I’ll take a stab at this one, Graham. From my perspective, the act of transforming processes from physical ones to digital ones — that’s the digital transformation, right? So it’s a process change, and there’s lots of literature and thought leadership around transformation in general. Putting the prefix “digital” in front of it simply implies that folks are performing considerable process analysis, focused on the transformation of activities that were formerly physical activities, and trying to digitalize those processes. In other words, replace physical processes with digital ones where possible. And the classic example we’ve all heard about for years is to replace paper-based activities or processes with digital ones. And so one of the first, historically, in the science-driven industries we all work in or around, was the many paper-based exchange processes, right? A classic example is lot release, right? If you work in any sort of manufacturing industry, the ability to assure quality is an important one. Part of that quality assurance process leverages experimental data acquired in laboratories — for example, quality control laboratories. The results of those quality experiments are summarized in reports. So data that’s acquired is abstracted and represented in reports, and often those reports are human-readable representations and summaries of experiments that are represented physically, you know, on paper.

Andrew Anderson  10:46

The first digital transformation exercise was to represent those reports electronically, right, as opposed to printed out on paper and reviewed in one’s hand, so to speak — reducing the physical volume of things, you know, cutting down on the amount of paper required for certain activities. Digital transformation has extended beyond replacing paper-driven, or document-driven, activities to having — call them portals, or interfaces — where the data streams from the lab to some sort of computer interface, thus avoiding that abstraction step. Abstraction often means reduction, right — some way to sort of squeeze the vast collection of data that’s generated through these quality experiments into some human-readable, human-interpretable form. So extending the analogy beyond a paper-based or document-based system to a data-based decision-support-type system is the next step in the digital transformation journey.

Andrew Anderson  12:08

Most recently, digital transformation has been extended to include not just human-readable data interfaces, but machine-readable ones. So — let’s continue down the quality control/quality assurance paradigm — being able to present data not just to human audiences, but also machine audiences. So not just human interfaces for decision support, but also machine ones, like machine learning applications and artificial intelligence. Assuring that the digital transformation paradigm is extended into machine interpretability is the last mile of that journey, so to speak. Hope that makes sense.

Sarah Srokosz  12:55

I think that covered a lot. Graham, do you have anything to add there?

Graham McGibbon  13:00

I’ll simply echo what Andrew was saying: the human journey is very much one of manual activities. We use materials, we change materials, we make materials. We have mechanical activities, but more and more in the modern world, we’re using electronic systems for our information, for the conveyance of information. And Andrew, you pointed out representation — that’s key in the digital transformation. If you store information in systems, instead of transcribing from paper to paper, or paper onto glass, so to speak, then machines can use it. And so we go from simple tool use to cycles where machines and humans are interacting. And as you pointed out, we have better data integrity as data flows in that process. That’s really the digital transformation: it moves from humans conducting every step, and being necessary for every step to proceed in a process, to one where a large amount of the process can happen automatically with the flow of data and materials. We can build machines to move materials, but at the same time we’re also building systems — that’s the digital transformation — to allow the data, the information, and the results to flow as we do those experiments. And I think that, as you pointed out, the last mile is really important in that. But that holistic data representation is the key to why people are seeing digital transformation as so important to their activities, right?

Sarah Srokosz  14:49

Yeah, I really liked Andrew’s point there about abstraction — that just because it’s not on paper, and maybe it is on your computer, if it’s in a document that you’ve had to compile, generally you are reducing some of the information. And then, as necessary as humans are to some part of these processes, we are subject to human error and biases. So, yeah, that’s another important point that I think is often overlooked: we’re not just talking about having it in a digital format, like a document — this is kind of the next level of that.

Andrew Anderson  15:29

Yeah. Not to get too granular on the subject, but you have data that is represented digitally, as we talked about for the digitalization part of digital transformation. So data exists as information that is subject to interpretation, right? And so you want a mechanism — a structure, a data structure — by which you represent either human interpretation or a rule-based interpretation. Oftentimes, when you want to automate interpretation, or automate a process in general, you need ways to structure the interpretation of the data that’s being generated. Furthermore, from the interpreted data, there’s a layer of analysis, right? So once data is interpreted — here’s a great analogy for considering the title of our podcast, right? If you have compositional data, like chromatography data, the information you get is, you know, ones and zeros, as Graham alluded to, or data in an array, like an XY plot. You have a table of sparse data, of which certain features can be observed, and based on rules you can extract, from data streamed or represented as an XY plot, a peak table, right? And your peak definitions usually follow — your peaks adhere to — certain feature rules. So then you can transform, or digitally represent, not just the data itself, but a collection of features, in this case peaks, and then the attributes of those features can be further interpreted or analyzed to reveal things like composition, right? So the first thing you have to think about, from a chromatography perspective, is: if my data have peaks, and I know something about the provenance of the sample and the method, I may infer that peaks with certain attributes have identity properties. Like, if a peak exhibits a certain retention time, and I know that one of my standards also exhibits the same retention time, I may assign a peak in my sample data set a certain identity based on that retention time attribute or property.
So being able to structure the resulting data with — call them — levels or properties, using an ontology or hierarchy, is one of the things that folks in the software world focused on scientific data think about from a specification — a data specification — point of view.
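The XY-plot-to-peak-table step, and the retention-time-based identity assignment Andrew describes, can be sketched in a few lines. This is a deliberately simplified toy — the data, the standards, and the tolerance are all made up for illustration, and real chromatography software does far more (baseline correction, integration, and so on).

```python
# Illustrative sketch: turn XY chromatogram data into a peak table, then
# assign identities by matching retention times to known standards.
# All data, names, and tolerances here are hypothetical.

def find_peaks(xy, min_height=0.0):
    """Local maxima above min_height -> list of (retention_time, height)."""
    peaks = []
    for i in range(1, len(xy) - 1):
        t, y = xy[i]
        if y > min_height and y > xy[i - 1][1] and y > xy[i + 1][1]:
            peaks.append((t, y))
    return peaks

def assign_identities(peaks, standards, tol=0.05):
    """Match each peak's retention time to a standard within tol (minutes)."""
    table = []
    for t, y in peaks:
        name = next((n for n, ts in standards.items() if abs(t - ts) <= tol),
                    "unknown")
        table.append({"rt": t, "height": y, "identity": name})
    return table

# Toy (time, signal) pairs with peaks near 1.2 and 3.4 minutes.
xy = [(0.0, 0.1), (1.1, 0.3), (1.2, 5.0), (1.3, 0.4),
      (2.0, 0.1), (3.3, 0.2), (3.4, 7.5), (3.5, 0.3)]
standards = {"caffeine": 1.21, "aspirin": 3.41}

peak_table = assign_identities(find_peaks(xy, min_height=1.0), standards)
```

Note how the structured peak table, not the raw XY array, is the thing you would carry forward into the ontology or hierarchy Andrew mentions.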

Andrew Anderson  18:46

So being able to carry the digital transformation philosophy and then apply it to some of the scientific work that goes on — thinking carefully about data structure is a really important aspect. And oftentimes, when Graham and I in particular work with our clients, the first thing we do is think about the why of a digital transformation exercise. What do you want to get out of this massive effort you’ll undertake, right? You know — sorry to get philosophical, Sarah — but it’s one thing to think about it at an executive level, right? If you’re a CEO of a large company with lots of people, you’re responsible for setting the direction of a large organization, right? And they’re often influenced by recommendations to undergo digital transformation exercises. One of the things that we do, when a leader in the scientific world is charged with supporting or sponsoring a digital transformation exercise, is to think about goals, but then translate goals into a practical plan before we start executing, right? We try to establish a philosophy — dare I say, a controlled vocabulary of terms — that can then be applied to some of the activities you’re going to undertake during that digital transformation exercise. And what that results in is, let’s call it, a strategy at the fundamental level that literally transforms some of the things you do in the lab that you may not even think about on the day-to-day — you transform that into a digital structure. And that may allow us to have a conversation about digital twins. But I’ll leave that for another question or topic. I think it’s an important subject, but it really bridges between, you know, the motivations of — call them — project sponsors at high levels, and the day-to-day activities that folks undertake in the lab to transform their activities into digitally enabled workflows and the like. Anyhow, hope that makes sense.

Sarah Srokosz  21:28

Yeah, yeah. And actually, you had it right on the nose. So the final definition that I wanted to ask the two of you about is the term digital twin. Graham, maybe you want to take that first.

Graham McGibbon  21:41

Sure. Andrew pointed out attributes earlier on, and a vocabulary, if not an ontology — which is a set of words, obviously, but the words have meanings, there’s no redundancy in the terms you’re using, and things have relationships to each other in terms of their ability to describe things. So a digital twin is a digital representation. It’s a way to describe data, but also the product or process that the data may pertain to. And Andrew — you’ll catch my words there — it’s because I was thinking of the design, make, test, analyze cycle that you had referred to, right? So there’s not just the data that we need to render digitally as a twin, but also those activities: the key attributes of the instruments and equipment design that are in those activities, not just the ingredients or substances or materials that have been used in them. So a digital twin is quite a bit broader than maybe what people used to think about in terms of “I have some material, and I have some data for that material” — it’s through that holistic collection that you end up with what I think is a digital twin. And with a digital twin, the key thing to me is that it would be information that would allow you to do something predictive and useful, and have insights, the way that you would with any actual — real, as opposed to virtual — set of information.

Andrew Anderson  23:22

Yeah, you nailed it, Graham. It’s like the juxtaposition of a fully granular depiction of a thing and that thing’s simulated motion in time, right? Its simulated behavior in time, right? So if you think about, in this case, application of the digital twin to a system — the system can be a physical sample, the so-called physical twin counterpart to the digital twin. So let’s apply it in this scientific context: you can have a sample of a substance, and the digital twin of the sample is likely going to be its exhibited properties, right? Its exhibited attributes. You can then extend the digital twin to include the provenance of the sample — usually, in our world, those samples come from experiments. And again, those experiments have attributes. So there’s the ability to fully represent the digital twin as a digital thing and simulate its behavior, right? Let’s talk about application — I think it’s important to define digital twin, but also to describe the application of the digital twin too. So in our case, in the world Graham and I live in, so to speak, folks want to utilize digital twins for cause and effect. So let’s talk about a real classic one: a so-called process control justification study, right? If I’m making a drug substance — could be of a biologic nature, could be small molecule, could be somewhere in between — you have an experiment that produces a substance that will turn into a drug substance, you know, with physical form modification and control and the like. And so the digital twin of that material — you can extend it to the experiment that you used to make that material, and simulate the effect of varying parameters to determine how those parameters affect composition, right? And composition impacts quality and risk.
And so being able to simulate the effect of varying parameters is super important to assure high quality of substances that are being produced and ultimately consumed, whether it’s drugs or packaged goods or food or beverage, right? You want to make sure that you have control over your process so that you assure control over composition. Okay, so you can’t — well, you could, but it’d be expensive and time-consuming — you could, quote unquote, fully factorialize all permutations and combinations of parameter ranges, then produce materials with that full factorial, and then test composition on that full factorial design. Hopefully the use of digital twins in that context is obvious, right? Once you have a viable simulation environment, you can look at cause and effect in a digital way and avoid making and testing all of that — you know, the full factorial of all combinations and permutations of process parameters and resultant…

Graham McGibbon  27:19

Which is expensive in materials, if not time — yeah, yeah.

Andrew Anderson  27:24

One would think — especially, you know, with specialty chemicals like pharmaceutical drug substances, no doubt. So yeah, that, for me, is the clear application of the digital twin: a world where you can cut down the number of physical entities you make because you can simulate that cause and effect. Hope that makes sense, Sarah.
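The full-factorial cost Andrew and Graham are describing can be sketched in a few lines: the number of physical experiments grows multiplicatively with each parameter, whereas a digital twin lets you screen the whole grid in simulation and run only the most promising conditions physically. The parameters, levels, and the surrogate model below are entirely made up for illustration.

```python
# Illustrative sketch of a full-factorial parameter grid. Even three
# parameters at three levels each means 3 * 3 * 3 = 27 physical runs;
# a digital twin screens the same grid in simulation first.
from itertools import product

# Hypothetical process parameters, a few levels each.
temperatures = [40, 60, 80]      # deg C
ph_levels    = [5.0, 6.0, 7.0]
stir_rates   = [200, 400, 600]   # rpm

grid = list(product(temperatures, ph_levels, stir_rates))

def simulated_impurity(temp, ph, rpm):
    """Toy surrogate model standing in for a real digital-twin simulation."""
    return abs(temp - 60) * 0.01 + abs(ph - 6.0) * 0.5 + abs(rpm - 400) * 0.001

# Screen the whole grid digitally; only the best few would be run physically.
best = min(grid, key=lambda p: simulated_impurity(*p))
```

With five or six parameters at realistic numbers of levels, the grid runs into the thousands, which is exactly why simulating cause and effect before making material is so attractive.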

Sarah Srokosz  27:49

Yeah, yeah, it does to me.

Barbora Townsend  27:50

Thank you both for laying the groundwork by defining what digitalization, digital transformation, and the digital twin mean in the context of pharma research, and how AI supports the innovation processes in generating new drug candidates.

Baljit Bains  28:03

Stay tuned for our next episode to find out what Graham and Andrew think about the current state of digitalization in the pharmaceutical industry and their advice for companies, either at the start of or in the midst of their digital transformation journey.

Barbora Townsend  28:15

That’s all for today. Thanks, as always, for spending time with us, and don’t forget to subscribe through your favorite podcast app. If you’ve been enjoying the show, we would appreciate it if you’d recommend it to a colleague or share it on social media.

Sarah Srokosz  28:29

The Analytical Wavelength is brought to you by ACD/Labs. We create software to help scientists make the most of their analytical data by predicting molecular properties and by organizing and analyzing their experimental results. To learn more, please visit us at www.acdlabs.com.


Enjoying the show?

Subscribe to the podcast using your favorite service.