Length 12:06 | Jan 15 2019
DeepMind, an AI lab & complete outsider to the field of molecular biology, beat top pharmaceutical companies with 100K+ employees, like Pfizer and Novartis, at predicting protein structures. This is huge!
Length 4:40 | Apr 2019
… a new generation of medicine — made from smaller, more durable proteins known as peptides — is on its way. In a quick, informative talk, molecular engineer and TED Fellow Christopher Bahl explains how he’s using computational design to create powerful peptides.
NATURE pdf: Constrained Peptides’ Time to Shine?
EXCERPT of full article
Octopuses…are short-lived, typically around for just one to two years.
That’s because they’re semelparous, which means they reproduce just once before they die. With female octopuses, once she’s laid her eggs, that’s it.
In fact, the mother even stops feeding – she’ll stay and watch over her eggs until they hatch, slowly starving to death. In captivity, towards the end, sometimes she’ll tear off her own skin, and eat the tips of her own tentacles.
Now, scientists have figured out why this grim scenario happens. It has to do with the optic gland between the octopus’s eyes, a gland similar to the pituitary gland in humans.
In 1977, researchers removed this gland and found that the octopus’s mothering instincts disappeared. She abandoned her eggs, started feeding again, and went on to live a much longer life.
The maturation of the reproductive organs appears to be driven by secretions from the optic gland. These same secretions, it seems, inactivate the digestive and salivary glands, which leads to the octopus starving to death.
In new research, neurobiologists from the University of Chicago used genetic sequencing tools to describe the precise molecular signals produced by the optic gland of a female California two-spot octopus (Octopus bimaculoides) after reproducing.
They also described four distinct phases of maternal behaviour that they were able to link to these signals, explaining how the optic gland drives her death.
“We’re bringing cephalopod research into the 21st century, and what better way to do that than have this unveiling of an organ that has historically fascinated cephalopod biologists for a long, long time,” said neurobiologist Z. Yan Wang.
“These behaviours are so distinct and so stereotyped when you actually see them. It’s really exciting because it’s the first time we can pinpoint any molecular mechanism to such dramatic behaviours, which to me is the entire purpose of studying neuroscience.”
In the field of synthetic biology, where engineers seek to “rewire” living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. Here are five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.
LOCKR … entirely conceived by humans and represents one of the few proteins made from scratch in the lab.
In their demonstrative studies, the researchers used LOCKR to trigger cell death, degrade specific proteins, and direct the movement of materials through living cells.
Individual LOCKR proteins can also be connected to form circuits, systems able to make changes within the cell in response to internal and external stimuli.
The researchers first tested their tool in yeast, then successfully designed a modified version that works in lab-grown human cells.
The second half of the last century was completely defined by a technological revolution: the software revolution. The ability to program electrons on a material called silicon made possible technologies, companies and industries that were at one point unimaginable to many of us, but which have now fundamentally changed the way the world works. The first half of this century, though, is going to be transformed by a new software revolution: the living software revolution. And this will be powered by the ability to program biochemistry on a material called biology. And doing so will enable us to harness the properties of biology to generate new kinds of therapies, to repair damaged tissue, to reprogram faulty cells or even build programmable operating systems out of biochemistry. If we can realize this — and we do need to realize it — its impact will be so enormous that it will make the first software revolution pale in comparison.
And that’s because living software would transform the entirety of medicine, agriculture and energy, and these are sectors that dwarf those dominated by IT. Imagine programmable plants that fix nitrogen more effectively or resist emerging fungal pathogens, or even programming crops to be perennial rather than annual so you could double your crop yields each year. That would transform agriculture and how we’ll keep our growing and global population fed. Or imagine programmable immunity, designing and harnessing molecular devices that guide your immune system to detect, eradicate or even prevent disease. This would transform medicine and how we’ll keep our growing and aging population healthy.
We already have many of the tools that will make living software a reality. We can precisely edit genes with CRISPR. We can rewrite the genetic code one base at a time. We can even build functioning synthetic circuits out of DNA. But figuring out how and when to wield these tools is still a process of trial and error. It needs deep expertise, years of specialization. And experimental protocols are difficult to discover and all too often, difficult to reproduce. And, you know, we have a tendency in biology to focus a lot on the parts, but we all know that something like flying wouldn’t be understood by only studying feathers. So programming biology is not yet as simple as programming your computer. And then to make matters worse, living systems largely bear no resemblance to the engineered systems that you and I program every day. In contrast to engineered systems, living systems self-generate, they self-organize, they operate at molecular scales. And these molecular-level interactions lead generally to robust macro-scale output. They can even self-repair.
Consider, for example, the humble household plant, like that one sat on your mantelpiece at home that you keep forgetting to water. Every day, despite your neglect, that plant has to wake up and figure out how to allocate its resources. Will it grow, photosynthesize, produce seeds, or flower? And that’s a decision that has to be made at the level of the whole organism. But a plant doesn’t have a brain to figure all of that out. It has to make do with the cells on its leaves. They have to respond to the environment and make the decisions that affect the whole plant. So somehow there must be a program running inside these cells, a program that responds to input signals and cues and shapes what that cell will do. And then those programs must operate in a distributed way across individual cells, so that they can coordinate and that plant can grow and flourish.
If we could understand these biological programs, if we could understand biological computation, it would transform our ability to understand how and why cells do what they do. Because, if we understood these programs, we could debug them when things go wrong. Or we could learn from them how to design the kind of synthetic circuits that truly exploit the computational power of biochemistry.
My passion about this idea led me to a career in research at the interface of maths, computer science and biology. And in my work, I focus on the concept of biology as computation. And that means asking what do cells compute, and how can we uncover these biological programs? And I started to ask these questions together with some brilliant collaborators at Microsoft Research and the University of Cambridge, where together we wanted to understand the biological program running inside a unique type of cell: an embryonic stem cell. These cells are unique because they’re totally naïve. They can become anything they want: a brain cell, a heart cell, a bone cell, a lung cell, any adult cell type. This naïvety, it sets them apart, but it also ignited the imagination of the scientific community, who realized, if we could tap into that potential, we would have a powerful tool for medicine. If we could figure out how these cells make the decision to become one cell type or another, we might be able to harness them to generate cells that we need to repair diseased or damaged tissue. But realizing that vision is not without its challenges, not least because these particular cells, they emerge just six days after conception. And then within a day or so, they’re gone. They have set off down the different paths that form all the structures and organs of your adult body.
But it turns out that cell fates are a lot more plastic than we might have imagined. About 13 years ago, some scientists showed something truly revolutionary. By inserting just a handful of genes into an adult cell, like one of your skin cells, you can transform that cell back to the naïve state. And it’s a process that’s actually known as “reprogramming,” and it allows us to imagine a kind of stem cell utopia, the ability to take a sample of a patient’s own cells, transform them back to the naïve state and use those cells to make whatever that patient might need, whether it’s brain cells or heart cells.
But over the last decade or so, figuring out how to change cell fate, it’s still a process of trial and error. Even in cases where we’ve uncovered successful experimental protocols, they’re still inefficient, and we lack a fundamental understanding of how and why they work. If you figured out how to change a stem cell into a heart cell, that hasn’t got any way of telling you how to change a stem cell into a brain cell. So we wanted to understand the biological program running inside an embryonic stem cell, and understanding the computation performed by a living system starts with asking a devastatingly simple question: What is it that system actually has to do?
Now, computer science actually has a set of strategies for dealing with what it is the software and hardware are meant to do. When you write a program, you code a piece of software, you want that software to run correctly. You want performance, functionality. You want to prevent bugs. They can cost you a lot. So when a developer writes a program, they could write down a set of specifications. These are what your program should do. Maybe it should compare the size of two numbers or order numbers by increasing size. Technology exists that allows us automatically to check whether our specifications are satisfied, whether that program does what it should do. And so our idea was that in the same way, experimental observations, things we measure in the lab, they correspond to specifications of what the biological program should do.
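The spec-checking idea from conventional software can be made concrete. Here is a minimal, hypothetical sketch (the function and property names are invented, not from the talk): a specification for a sorting program, checked automatically against the implementation.

```python
from itertools import permutations

def my_sort(xs):
    """The program under test: order numbers by increasing size."""
    return sorted(xs)

def satisfies_spec(program, inputs):
    """Check two specifications: the output is ordered, and it is a
    permutation of the input (nothing lost, nothing invented)."""
    for xs in inputs:
        out = program(xs)
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        same_elements = sorted(out) == sorted(xs)
        if not (ordered and same_elements):
            return False
    return True

# Exhaustively check every ordering of a small input set.
test_inputs = [list(p) for p in permutations([3, 1, 2])]
print(satisfies_spec(my_sort, test_inputs))  # True
```

Real verification tools prove such properties for all inputs rather than testing a finite set, but the shape of the question is the same: does the program do what the specification says it should?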
So we just needed to figure out a way to encode this new type of specification. So let’s say you’ve been busy in the lab and you’ve been measuring your genes and you’ve found that if Gene A is active, then Gene B or Gene C seems to be active. We can write that observation down as a mathematical expression using the language of logic: if A, then B or C. Now, this is a very simple example, OK. It’s just to illustrate the point. We can encode truly rich expressions that actually capture the behavior of multiple genes or proteins over time across multiple different experiments. And so by translating our observations into mathematical expressions in this way, it becomes possible to test whether or not those observations can emerge from a program of genetic interactions.
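One way to sketch that encoding in code (this is an illustration, not the actual tool): an observation becomes a Boolean constraint over gene activity states, which can then be evaluated against any candidate state of the system.

```python
# Each experimental observation becomes a Boolean constraint over gene
# states. A state maps gene name -> active (True) / inactive (False).

def obs_if_a_then_b_or_c(state):
    """Observation: if Gene A is active, then Gene B or Gene C is active.
    Logically: (not A) or B or C."""
    return (not state["A"]) or state["B"] or state["C"]

consistent = {"A": True, "B": False, "C": True}
inconsistent = {"A": True, "B": False, "C": False}

print(obs_if_a_then_b_or_c(consistent))    # True
print(obs_if_a_then_b_or_c(inconsistent))  # False
```

Richer specifications, such as behavior over time and across experiments, are constraints of the same kind, just over sequences of states rather than a single one.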
And we developed a tool to do just this. We were able to use this tool to encode observations as mathematical expressions, and then that tool would allow us to uncover the genetic program that could explain them all. And we then applied this approach to uncover the genetic program running inside embryonic stem cells to see if we could understand how to induce that naïve state. And this tool was actually built on a solver that’s deployed routinely around the world for conventional software verification. So we started with a set of nearly 50 different specifications that we generated from experimental observations of embryonic stem cells. And by encoding these observations in this tool, we were able to uncover the first molecular program that could explain all of them.
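The actual tool delegates this search to an industrial solver, but the core idea, searching for an interaction program consistent with every observation, can be mimicked by brute force on a toy scale (gene names, edge semantics, and observations below are all invented for illustration):

```python
from itertools import product

GENES = ["A", "B", "C"]

def step(state, edges):
    """One synchronous update: a gene switches on iff at least one of
    its activators (per the candidate edge set) is currently on."""
    return {g: any(state[src] for src, dst in edges if dst == g)
            for g in GENES}

# Specifications: observed (state, next_state) pairs from "experiments".
specs = [
    ({"A": True,  "B": False, "C": False}, {"A": False, "B": True,  "C": True}),
    ({"A": False, "B": True,  "C": False}, {"A": False, "B": False, "C": False}),
]

# Enumerate every possible activation edge set and keep those that
# reproduce every observed transition.
all_edges = [(s, d) for s in GENES for d in GENES]
programs = []
for mask in product([False, True], repeat=len(all_edges)):
    edges = [e for e, keep in zip(all_edges, mask) if keep]
    if all(step(s0, edges) == s1 for s0, s1 in specs):
        programs.append(edges)

print(len(programs))  # candidate programs explaining all observations
```

Brute force only works for a handful of genes; the space of programs grows exponentially, which is exactly why a solver of the kind used for software verification is needed at realistic scale.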
Now, that’s kind of a feat in and of itself, right? Being able to reconcile all of these different observations is not the kind of thing you can do on the back of an envelope, even if you have a really big envelope. Because we’ve got this kind of understanding, we could go one step further. We could use this program to predict what this cell might do in conditions we hadn’t yet tested. We could probe the program in silico.
And so we did just that: we generated predictions that we tested in the lab, and we found that this program was highly predictive. It told us how we could accelerate progress back to the naïve state quickly and efficiently. It told us which genes to target to do that, which genes might even hinder that process. We even found the program predicted the order in which genes would switch on. So this approach really allowed us to uncover the dynamics of what the cells are doing.
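A prediction of the kind described, which order genes switch on from a given starting state, can be read off by simulating the uncovered program. A hedged sketch with invented rules (a simple cascade where A activates B and B activates C):

```python
def simulate(rules, state, steps):
    """Synchronously apply Boolean update rules, recording the step at
    which each gene first switches on."""
    first_on = {g: (0 if state[g] else None) for g in state}
    for t in range(1, steps + 1):
        state = {g: rule(state) for g, rule in rules.items()}
        for g in state:
            if state[g] and first_on[g] is None:
                first_on[g] = t
    return first_on

# Hypothetical cascade: A stays on, A activates B, B activates C.
rules = {
    "A": lambda s: s["A"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}
first_on = simulate(rules, {"A": True, "B": False, "C": False}, 3)
print(first_on)  # {'A': 0, 'B': 1, 'C': 2}: A before B before C
```

Probing the program in silico amounts to running simulations like this from conditions that haven't yet been tested in the lab, then checking the predictions experimentally.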
What we’ve developed, it’s not a method that’s specific to stem cell biology. Rather, it allows us to make sense of the computation being carried out by the cell in the context of genetic interactions. So really, it’s just one building block. The field urgently needs to develop new approaches to understand biological computation more broadly and at different levels, from DNA right through to the flow of information between cells. Only this kind of transformative understanding will enable us to harness biology in ways that are predictable and reliable.
But to program biology, we will also need to develop the kinds of tools and languages that allow both experimentalists and computational scientists to design biological function and have those designs compile down to the machine code of the cell, its biochemistry, so that we could then build those structures. Now, that’s something akin to a living software compiler, and I’m proud to be part of a team at Microsoft that’s working to develop one. Though to say it’s a grand challenge is kind of an understatement, but if it’s realized, it would be the final bridge between software and wetware.
More broadly, though, programming biology is only going to be possible if we can transform the field into being truly interdisciplinary. It needs us to bridge the physical and the life sciences, and scientists from each of these disciplines need to be able to work together with common languages and to have shared scientific questions.
In the long term, it’s worth remembering that many of the giant software companies and the technology that you and I work with every day could hardly have been imagined at the time we first started programming on silicon microchips. And if we start now to think about the potential for technology enabled by computational biology, we’ll see some of the steps that we need to take along the way to make that a reality. Now, there is the sobering thought that this kind of technology could be open to misuse. If we’re willing to talk about the potential for programming immune cells, we should also be thinking about the potential of bacteria engineered to evade them. There might be people willing to do that. Now, one reassuring thought in this — well, less so for the scientists — is that biology is a fragile thing to work with. So programming biology is not going to be something you’ll be doing in your garden shed. But because we’re at the outset of this, we can move forward with our eyes wide open. We can ask the difficult questions up front, we can put in place the necessary safeguards and, as part of that, we’ll have to think about our ethics. We’ll have to think about putting bounds on the implementation of biological function. So as part of this, research in bioethics will have to be a priority. It can’t be relegated to second place in the excitement of scientific innovation.
But the ultimate prize, the ultimate destination on this journey, would be breakthrough applications and breakthrough industries in areas from agriculture and medicine to energy and materials and even computing itself. Imagine, one day we could be powering the planet sustainably on the ultimate green energy if we could mimic something that plants figured out millennia ago: how to harness the sun’s energy with an efficiency that is unparalleled by our current solar cells. If we understood that program of quantum interactions that allow plants to absorb sunlight so efficiently, we might be able to translate that into building synthetic DNA circuits that offer the material for better solar cells. There are teams and scientists working on the fundamentals of this right now, so perhaps if it got the right attention and the right investment, it could be realized in 10 or 15 years.
So we are at the beginning of a technological revolution. Understanding this ancient type of biological computation is the critical first step. And if we can realize this, we would enter the era of an operating system that runs living software.
Thank you very much.
Sara-Jane Dunn is a scientist working at the interface between biology and computation, using mathematics and computational analysis to make sense of how living systems process information.
TED: Sara-Jane Dunn
Microsoft: Sara-Jane Dunn
Length 12:25 | June 02, 2016
CRISPR gene drives allow scientists to change sequences of DNA and guarantee that the resulting edited genetic trait is inherited by future generations, opening up the possibility of altering entire species forever. More than anything, this technology has led to questions: How will this new power affect humanity? What are we going to use it to change? Are we gods now? Join journalist Jennifer Kahn as she ponders these questions and shares a potentially powerful application of gene drives: the development of disease-resistant mosquitoes that could knock out malaria and Zika.
Length 34:21 | Mar 13, 2019
When we think of information processing systems, we often think of computers, but we ourselves are made up of information processing systems – trillions of them – also known as the cells in our bodies. While these cells are robust, they’re also extraordinarily complex and not altogether predictable. Wouldn’t it be great, asks Dr. Andrew Phillips, head of the Biological Computation Group at Microsoft Research in Cambridge, if we could figure out exactly how these building blocks of life work and harness their power with the rigor and predictability of computer science? To answer that, he’s spent a good portion of his career working to develop a system of intelligence that can, literally, program biology.
Today, Dr. Phillips talks about the challenges and rewards inherent in reverse engineering biological systems to see how they perform information processing. He also explains what we can learn from stressed out bacteria, and tells us about Station B, a new end-to-end platform his team is working on that aims to reduce the trial and error nature of lab experiments and help scientists turn biological cells into super-factories that could solve some of the most challenging problems in medicine, agriculture, the environment and more.
Andrew Phillips: So, what Station B is aiming to do is to develop a platform, a system, that will transform programming biology from what is currently a process of trial and error to something that’s systematic and predictable. And that requires bringing together many different pieces of the puzzle. In programming biology, there’s this sort of standard “design, build, test, learn” cycle. So, we’re trying to combine these different stages of programming into an integrated platform.
Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.
Host: Andrew Phillips, welcome to the podcast.
Andrew Phillips: Thanks for having me. It’s a pleasure to be here.
Host: So, you’re the head of the biological computation group at MSR in Cambridge. What’s going on in your group? Broad strokes, what big problems are you trying to solve? What gets you up in the morning?
Andrew Phillips: Well, one of the things we’re really working on is trying to understand how biological systems compute. So biological systems, like living cells, they actually perform information processing, but they compute via means that we don’t quite fully understand. So, my team is working on trying to reverse engineer how these living systems, biological systems, perform information processing.
Host: I want to unpack the big suitcase of biological computation a bit more. As you’ve noted, programming biology… it’s not new. But there are some new things that are going on. So, give us a short primer on biological computation. Where did this all get started and why, and where are we today?
Andrew Phillips: So, one of the things we’re focusing on is trying to understand how to program biological systems. But to do that, you need to understand how these systems function. If you wanted to try to fix a car, but you didn’t understand how the components worked, you couldn’t just randomly change those components and expect the car to work. So, in order to program a system, we need to understand how it’s working and, in our case, we need to understand how these cells are computing. Now, as a species, we’ve been using cells to do things for us for thousands of years. We’ve used yeast to make bread or to brew beer. And then, several decades ago, we were able to reprogram microorganisms to produce medicines, things like insulin. And now we’re taking this to the next level as a field, as a discipline, and programming organisms to do much more sophisticated things, make much more complex medicines, fuels and materials. So, understanding how systems compute is an important step to being able to program them more effectively.
Host: When did somebody discover, hey, we could actually make this do something for us or make it do something different than it does? Was that an “aha” moment or an accident, or did people try to start manipulating biology?
Andrew Phillips: Well, I think the first example of programming a microorganism to make a medicine, in this case, insulin, was in the seventies. And this was as we started to understand more about DNA, about how cells work. And then, more recently, we’ve been able to sequence the human genome. And now we’ve been able to write DNA. We’ve been able to write genes. So, there’s been this steady progress in technology that sort of underpinned our ability to program biology. And in fact, there’s been an exponential growth in our ability to read DNA and also to write DNA. And then, more recently, we’ve had some transformations in our ability to edit DNA through things like CRISPR. So, we have this underlying technology that’s allowing us to manipulate DNA, read, write and edit it. And that’s also underpinned this technological growth in our ability to program biology.
Host: Well, let’s talk about those underpinnings for a minute. There are some pieces that need to be in place before we can make significant progress. You’ve alluded to DNA. Are there other pieces that we really need to understand before we can move forward and make biology work for us more specifically? And even more predictably and less expensively?
Andrew Phillips: Well, yeah, I mean, there’s still a lot we have to learn in terms of understanding how biological systems function. So, biological systems are highly complex, they’re massively parallel, they’re probabilistic. In many ways, they’re closer to analog computing systems than the digital ones that we’re familiar with. So, we still have a lot of work to do to reverse engineer these systems. So that’s a challenge, understanding how these systems work. Another challenge is that we still lack a way of doing biological experiments systematically and reliably. A lot of experiments are done manually, they’re time-consuming, they’re error-prone. And in fact, recent studies have shown that most biological experiments are not even reproducible. And then the final challenge is that we actually lack the technology stack for programming biology. There isn’t really a systematic way. In many ways, programming biology is sort of similar to the early days of trying to program silicon before the advent of high-level languages and the fundamental theory of computing that we sort of take for granted today. So, we’re sort of still in the days of almost punch cards and very basic programming technology.
Host: That’s funny. So, you alluded, just now, to our much more advanced ability to read and write DNA. How has that impacted the growth in programming biology, and what limitations do we still face, aside from the things that you’ve just mentioned in terms of what we don’t understand?
Andrew Phillips: Yeah, so this technology has been hugely important and has enabled the progress that we’ve seen to date in programming biological organisms. So, by that, I mean reading, writing and editing DNA. But on its own, it’s not enough. So, we can read an entire genome, but we still don’t understand what most of it means. And we can write an entire gene, but we’re still unable to predict how that gene will behave inside a living organism. And we can now edit DNA with really high precision, with technologies like CRISPR, but we’re still unable to predict the consequences of those edits. So really, we’re still in a situation where programming biology is done by trial and error.
Host: So, what is it about biological systems that confounds our ability to program them? We’re coming at this from a computer science angle, so we’re basically talking about using programming languages to compile biological algorithms to DNA code instead of binary. Talk about the differences between how biological cells operate and how computer programs operate. What are the unique challenges that scientists face in programming biology?
Andrew Phillips: So yeah, essentially, biological programs operate in a fundamentally different manner to traditional silicon-based programs that we’re used to writing. So, you can think of a traditional computer program more like a recipe where you have a list of actions that happen in a particular order. You know, do step one, then step two. And maybe you’ll repeat this N times. Whereas biological systems, they actually compute via fundamentally different means. So, it’s more like a chemical soup where you have thousands of proteins interacting in parallel in a noisy fashion, and many of these interactions can go wrong with some probability. But yet out of all that noise emerges a fairly robust algorithm that is used to compute things like, when should a cell divide? Or how should an immune system respond to a foreign invader? Or even things like the internal body clock, which is essentially a combination of genes and protein interactions that computes a 24-hour period fairly reliably. So, these algorithms are actually very complicated for us to understand because we’re not used to that. We’re still trying to reverse engineer them.
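That "chemical soup" style of computation is often modeled with stochastic simulation. Here is a minimal Gillespie-style sketch of a single gene producing and degrading a protein; the rates and numbers are invented for illustration, not taken from the episode. Individual events are random, yet the overall behavior is robust.

```python
import random

def gillespie(production=10.0, degradation=0.1, t_end=200.0, seed=1):
    """Exact stochastic simulation of birth-death protein dynamics.
    Each event (one molecule made or destroyed) happens at a random
    time, yet counts settle around production/degradation."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    while t < t_end:
        rate_total = production + degradation * n
        t += rng.expovariate(rate_total)           # time to next event
        if rng.random() < production / rate_total:
            n += 1                                 # a protein is produced
        else:
            n -= 1                                 # a protein degrades
        samples.append(n)
    return samples

samples = gillespie()
mean = sum(samples) / len(samples)
print(round(mean))  # fluctuates around production/degradation = 100
```

This is the inverse of a recipe-style program: no step ordering is written down anywhere, but a reliable macro-scale output (a steady protein level) emerges from thousands of noisy molecular events.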
Host: Let’s talk about noise for a second. You’ve just mentioned it, and you’ve recently published a paper about how bacteria use noise to survive stress. So, tell us about this. What insights did you gain from this research about noise and bacteria? And what are the implications for the work that you’re doing?
Andrew Phillips: So, this is one of many examples, actually, of how we, as a team at Microsoft Research, are collaborating with leading scientists in many different fields at universities. This particular collaboration with the University of Cambridge was with James Locke at the Sainsbury Laboratory, and a joint, Microsoft-funded PhD student, Om Patange. And we were looking together at trying to understand the role of noise in how bacteria survive stress. Now, stress, in this case, is not an emotional response. It’s more about, you know, if the bacteria are in adverse conditions, say you give them hydrogen peroxide or some kind of dangerous compound that could potentially kill them, how do they survive? And in this work, Om did most of the experiments and we looked together at the computational modeling side, trying to understand how bacteria can actually anticipate stress and survive. And it turns out that bacteria grow in a noisy fashion, and they also turn on a stress response sort of randomly. And this noisy growth and noisy stress response are coupled, so that bacteria that are growing slowly are actually more able to survive the stress, and some fraction of the bacteria randomly decide to get into this state so that if a stress happens to be applied in the future, they survive. And so, this is a really interesting example of how noise can perform a useful function for bacterial systems. But more generally, it’s one of the examples of how we’re trying to understand the mechanisms that bacteria and other living cells use to survive and process information.
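The bet-hedging strategy described, where a random fraction of cells commits to a slow-growing, stress-resistant state before any stress appears, can be caricatured in a few lines. All numbers here are invented; this is a sketch of the idea, not the published model.

```python
import random

def survivors(pop_size, p_resistant, seed=0):
    """Each cell independently commits to a slow-growing, stress-resistant
    state with probability p_resistant. A later stress event kills every
    fast-growing (non-resistant) cell; only the committed cells remain."""
    rng = random.Random(seed)
    resistant = sum(rng.random() < p_resistant for _ in range(pop_size))
    return resistant

# With bet-hedging, some of the population always survives a stress;
# without it (p_resistant = 0), the stress wipes the population out.
print(survivors(10_000, 0.05))  # roughly 5% of 10,000 cells
print(survivors(10_000, 0.0))   # 0
```

The evolutionary trade-off is that the resistant cells grow more slowly, so the population pays a small ongoing cost in exchange for not going extinct when stress eventually arrives.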
Host: So, you’re talking about bacteria, writ large, and we know that some bacteria are actually really, really bad. And we don’t want those to survive. Is there a way to parse out, hey, I’m going to, you know, provide some noise and stress to the bacteria that I want to survive?
Andrew Phillips: Well, yeah, it’s essentially trying to understand how a system works. Then we can try and direct it, reprogram it, depending on what we want it to achieve. So, if it’s a dangerous infection that we’re trying to eliminate, then we can understand where we want to perturb that system. For example, trying to overcome things like antibiotic resistance. And if it’s a beneficial bacteria, for example, the bacteria that lives inside our gut, we want those bacteria to survive because they provide tremendous benefits to us. And so, understanding the mechanisms that bacteria use in general can help us determine what strategies to use in the beneficial case and in the harmful case.
Host: What are the most promising applications of the research you’re doing? What is the field hoping for, before we start talking about some specific things that you are doing at Cambridge?
Andrew Phillips: Yeah, it’s a really exciting field. It’s often referred to as synthetic biology, where the goal is to program biological systems more systematically using engineering-based principles. And so, this field, as a whole, is moving forward rapidly and there are many applications that are actually currently making excellent progress, and there are many potential future applications. The ones that excite me most are actually in the medical field. Biologics, these are drugs made by reprogrammed organisms. And they essentially are the fastest-growing sector in the pharmaceutical industry, and they account for over half of industry revenues and annual drug approvals. And they’re actually some of the most powerful treatments we have for diseases like cancers that many traditional drugs, chemical-based drugs, are not able to treat. And so, these biologics, they’re too complex to be made by ordinary chemical means. And so instead, they’re made by genetically programmed organisms that act as living factories. And biologics also includes sort of more advanced treatments. One example is cell therapy, where you can actually reprogram a patient’s immune cells to target specific cancers, and there’s an example of a company, Oxford BioMedica, with whom we’re working, that, in partnership with Novartis, they’ve developed the first living cancer drug which essentially reprograms a patient’s immune cells to fight cancer with 80 percent patients in complete remission in the first trials. So that’s one of the most exciting areas. But there are also many other areas. Agriculture is another one. So, nitrogen fertilizer is responsible for five percent of global greenhouse emissions, and half of the fertilizer is washed away causing toxic pollution. And this company called Pivot Bio, they’ve essentially reprogrammed soil microbes to transfer nitrogen directly to the plant roots without emitting these greenhouse gases and with almost no pollution. So very little is washed away. 
These programmed microbes are actually performing extremely well in recent trials in the field. And then there’s a lot of potential for other industries as well, like construction. So, the cement industry accounts for about five percent of global carbon dioxide emissions. And there’s a company called bioMASON that’s reprogrammed microbes to produce cement at ambient temperatures so they can get rid of most of these emissions. And then textiles. So, the textile industry generates about a fifth of the world’s industrial water pollution, mainly in developing countries, and this company called Colorifix, it’s an early startup, but they’ve actually programmed microbes to produce and fix dyes to fabric using ten times less water than traditional dyeing methods. And then, there are a whole load of other examples. For instance, in the chemical industry, the company called Genomatica, they’ve actually programmed microbes to produce fully biodegradable plastics, and so now they can produce biodegradable plastics at scale to replace things like plastic bags. And then you look at the textile industry as a whole. We can program yeast to produce leather or even spider silk, so there’s a whole range of technologies that are really exciting.
Host: So, this research is incredibly ambitious. It takes a lot of brains, a lot of expertise. We’ll talk about partners in a minute. But I want to talk right now about the main project that you’re working on. It’s called Station B. So, we’ve identified some of the problems inherent in programming biology as well as some of the sort of individual trial and error attempts to solve them. But this is a much more comprehensive run at this hill. Tell us all about Station B. What is it? How’s it different? What’s it going to do?
Andrew Phillips: So, Station B is really motivated by all of the applications that I just talked about, right? And so, there’s this tremendous potential, but yet there are these tremendous, you know, barriers to achieving that potential. And the one I mentioned is just the fact that programming biology is primarily done by trial and error. And so, you know, there are many aspects to that that we can try and address. So what Station B is aiming to do is develop a platform, a system, that will transform programming biology from what is currently a process of trial and error to something that’s systematic and predictable. And that requires bringing together many different pieces of the puzzle. In programming biology, there’s a sort of standard “design, build, test, learn” cycle. So, we’re trying to combine these different stages of programming into an integrated platform. And in the design phase, we’re developing biological programming languages and compilers that can take programs written in a language that people can understand and compile them down into DNA, you know, code, that living systems can execute. In the test phase, we’re partnering with a company called Synthace. They actually specialize in lab automation. They’re one of the leading lab automation companies. And what they’re doing is developing device drivers and an infrastructure layer to actually make it much easier to program lab equipment, lab robots, to do experiments more systematically and reproducibly by digitally encoding those experiments as programs. And Synthace is actually built on top of Microsoft Azure Internet of Things technology. So, we’ve got design. We’ve got build. We’ve got test. And in the learn phase, we’re actually combining expertise in machine learning to analyze the data in order to learn models of how biological systems compute.
So, we’re sort of proposing models, using machine learning to actually refine our hypotheses, and then storing that information, that knowledge, inside a knowledge base so that as we go around this “design, build, test, learn” cycle, we’re actually getting better at understanding how to program biological systems. And so the key point here is to try and bring together these different technologies. And over the past decade, almost, we’ve been working on individual methods, individual pieces, individual programming languages. And now with Station B, we’re trying to bring together the individual methods we’ve been developing and, you know, some of the breakthroughs that we’ve made, into this integrated system that will help our partners and collaborators become better at programming biological systems.
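The “design, build, test, learn” cycle Phillips describes can be sketched in a few lines. Everything here is hypothetical: the `design`, `build_and_test`, and `learn` functions are stand-ins (the real platform involves compilers, lab robots, and machine learning, not a one-parameter optimizer), but the loop structure, where each round’s measurement refines a growing knowledge base that informs the next design, is the point.

```python
import random

random.seed(1)

def design(knowledge):
    # Propose a candidate parameter near the best known value so far.
    best = knowledge.get("best_param", 0.5)
    return min(1.0, max(0.0, best + random.uniform(-0.2, 0.2)))

def build_and_test(param):
    # Stand-in for an automated lab experiment: a noisy measurement
    # of how well the design performs (unknown optimum at 0.7).
    true_optimum = 0.7
    return 1.0 - abs(param - true_optimum) + random.uniform(-0.05, 0.05)

def learn(knowledge, param, score):
    # Update the knowledge base if this round beat the best so far.
    if score > knowledge.get("best_score", float("-inf")):
        knowledge["best_score"] = score
        knowledge["best_param"] = param
    return knowledge

knowledge = {}
for _ in range(50):  # iterate the design-build-test-learn cycle
    p = design(knowledge)
    s = build_and_test(p)
    knowledge = learn(knowledge, p, s)

print(round(knowledge["best_param"], 2))
```

Each pass through the loop leaves the knowledge base a little better informed, which is exactly the claim: iterating the cycle makes you progressively better at programming the system.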
Host: So where is this now? It’s very much still in the research phase, yeah?
Andrew Phillips: That’s right. We do have a research prototype of this platform that we’ve developed. The next phase now is to actually work very closely with a selected number of partners in order to develop and apply this platform to specific challenges.
Host: So, let’s talk about those partners for a minute. You’ve got them across industry and academia. Who are you working with, and what kinds of things might we expect to see?
Andrew Phillips: Well, we continue to work with many university collaborators around the world on a range of specific research projects. But really, the first university collaboration involving Station B as a platform is with Princeton. There, we’re working with Professor Bonnie Bassler, head of the Molecular Biology Department, and also Professor Ned Wingreen, a biophysicist by training, on understanding the mechanisms of biofilm formation. So, biofilms are essentially surface-associated colonies of bacteria, and they actually kill as many people as cancer, and they are one of the leading causes of microbial infection worldwide and also an important cause of antibiotic resistance, which was recently highlighted by the World Health Organization as a growing crisis that we cannot ignore. So, what we’re trying to do is use the Station B platform to understand how biofilms form. What are the mechanisms that they use? And the platform, as I say, will combine programming languages and analysis methods to allow us to program microbial systems, perturb these microbial systems, measure the effects of those perturbations and try and reverse engineer how bacteria communicate and how they interact in order to form these biofilms. And then, by understanding the mechanisms of formation, we can seek to disrupt these biofilms, and potentially, hopefully in the future, that would give rise to new forms of treatment.
Host: And by biofilm, you mean slime.
Andrew Phillips: Yeah, that’s right.
Host: Well, I mean, let’s get real. So that’s fascinating, because one of the things that we think about when we think about what kills people and what’s bad, is disease. But where does the disease come from? So that’s what you’re addressing, right, is if we can get to the source, we can control more of it?
Andrew Phillips: That’s right. I mean, for many years, you know, the pharmaceutical industry has almost been forced to do things, again, by trial and error. There’s a disease and we have a hunch as to what molecules we want to target. And then, you know, pharmaceutical companies and researchers will just test the whole range of random compounds, see which ones stick, and then maybe put those in mice and then maybe eventually put them in people, without often knowing how these drugs are working. But now, as treatments become more sophisticated and as we get better at treating disease, it’s becoming increasingly important to understand how the treatments work, and that requires an understanding of how the disease or the pathogen works.
Host: This is so cool. Because if you look at science over the eons, it’s been, what happens if I put this with that? And your efforts here are to codify and shrink down that process of trial and error by using computer science.
Andrew Phillips: That’s right. And I do want to emphasize, you know there’s a whole field, and there are many people around the world working on this, and we’re, you know, a part of that field. You know, at Microsoft, we do have expertise, and many years of research and breakthroughs in biological programming languages, compilers, machine learning methods but we’re part of this growing field that’s really trying to solve some of the most important challenges facing humanity.
Host: So, who are some other partners that you’re working with in Station B, and what are you working on with them?
Andrew Phillips: So, our main other partner is Oxford BioMedica. And, as I mentioned briefly before, they essentially have developed technology to reprogram a patient’s own immune cells to target specific cancers. And they are the first company, together with Novartis, to actually have FDA approval for this type of treatment. And in clinical trials, 80 percent of patients who actually had no hope of surviving, many of them had had a bone marrow transplant or had gone through chemotherapy, 80 percent of these patients, when they received this treatment, were in complete remission. And the treatment has also been approved by the NHS, the National Health Service, in the UK, but at a cost of £282,000 per patient. And so, these treatments are really expensive. And part of our collaboration with Oxford BioMedica is to try and work with them to improve the ways in which these treatments are produced and, by understanding how the cells are functioning, how the cells are producing the treatments, to actually bring down the costs, but also to help with the development, in the future, of new treatments. There’s a whole range of diseases, including diseases like Parkinson’s disease and others, which could benefit from this type of technology that Oxford BioMedica and others in the field are developing. And so, we’ve just started a collaboration with them to help improve their existing technology, bring down the costs, and allow them to develop new treatments, which in turn will be subject to the rules and regulations of the industry. Oxford BioMedica, their treatment is saving lives today. And with our Station B platform, we are looking forward to working closely with them to help save more lives tomorrow.
Host: All right, Andrew, with all the promising futures, including winning the war on slime, your research is, at its core, about altering biology via computer coding. What could possibly go wrong?
Andrew Phillips: Yeah, good question. Well, as I said before, we are very careful about who we work with. And the two main partners we’re working with for Station B, Princeton and Oxford BioMedica, they are subject to, you know, very stringent regulations that they abide by. And they’ve been doing this work for many years. And, as I say, as new treatments are developed, then those treatments will go through the same, or even more rigorous, approval processes. So, we’re really working with the right partners to try to help them become more productive. So that’s one point. What’s also very encouraging is that governments are taking this technology very seriously, and they’re the ones who are setting the agenda, and there have been councils appointed by various governments to study synthetic biology and the desire to program biology more effectively. And this situation is constantly being monitored. And as regulations are produced, then our partners will abide by those. So yeah, we have to be very careful in that respect.
Host: Well, let me push in a little bit there, because we have so many best-case scenarios in front of us on how this technology could be really helpful in our lives. But I can think of several, if not numerous, outcomes that might fall in the dystopian bucket of technical advance. So even as you think about how governments and agencies can try to regulate this, is there anything that keeps you up at night?
Andrew Phillips: What keeps me up at night right now is all of these challenges we face as a species, you know, sustainability and disease and environmental pollution… That’s what currently keeps me up at night. And I see this technology as a way, as I mentioned in many of the applications I talked about, as a way to solve so many of these challenges. There’s also another issue, which is that, you know, what if we do nothing? So, nature itself, interestingly enough, is constantly evolving. Natural organisms are constantly mutating. Viruses are mutating. So, nature is producing new diseases, naturally, constantly. And we’re seeing, in some cases, resistance to medicines like antibiotics that have saved hundreds of millions of lives. These systems are now becoming resistant to antibiotics, and so we need to find new treatments. And so, if we do nothing, there is a real danger that a global pandemic breaks out that nature has produced through random mutation that we are unable to treat because we don’t understand how these systems work and we’re not able to develop the treatment in time. Or as these existing treatments start to fail because nature, again, is mutating and smart and outcompeting us and going around our treatments. If we don’t understand how to develop new treatments quickly enough then we’re in real trouble. So, I think there’s a real threat from nature itself. But there’s another important issue as well, which is, you know, as I mentioned before about the drug industry traditionally doing things by trial and error, and now we see this new potential still being done by trial and error. It’s going to be increasingly important to do things in a predictable way, to do things systematically, to be able to understand what we’re doing. 
I think with computer models, programming languages, machine learning, being able to close that loop between models and experiments, we’ll be able to predict, more and more accurately, the outcomes of the modifications we’re making so that we can be very careful about not making the wrong modifications. And if we get better and better at counteracting the bioterrorist that is nature, which is constantly throwing things at us, we’ll also get better and better at counteracting human endeavors which are trying to be malicious, because now we understand that if a random mutation happens or a deliberate mutation happens, we’ll be able to counteract it. So, I think it’s going to be really important to stay on top of this.
Host: Andrew, tell us about yourself and your academic background. You’re originally from Barbados, West Indies. You went to Toulouse, France, and now you’re in Cambridge, England. You’ve had quite a journey. What got you started, and how did you end up at Microsoft Research?
Andrew Phillips: Okay, so I was always interested in robotics, engineering. I was fascinated by machines that people designed. And so, I studied engineering in Toulouse, France, and then I got really interested in programming. And so, I learned computer science in Cambridge, did a PhD at Imperial College, in London, and studied concurrent, parallel computer systems. So, programming languages for programming these parallel systems, the theory and also the implementation techniques. And there, while at Imperial, I met Luca Cardelli, a scientist at Microsoft Research at the time. And he was of a similar background but a leader in the field of concurrent programming languages. And he was applying these to study biological systems, which are massively concurrent, and I got fascinated by this. And so, I did an internship at Microsoft Research, and then I was hired by Stephen Emmott, who was leading a team at the intersection of computer science and biology. And that’s how I got started. Since then I’ve been trying to develop methods from computer science but that are specific to biology. And there’s been a lot of cross-fertilization there.
Host: So, did you actually come up with the programming language to translate from binary code to DNA code, as it were?
Andrew Phillips: Well, actually, I had an intern, very, very talented intern, back in 2009, Michael Pedersen. And we worked together on this very preliminary prototype of a programming language which he coded up and then we published a paper together and designed the language together. And since then, we’ve sort of been evolving and extending the language, and more importantly, trying to bridge the gap between what you write on a computer and what gets executed in a cell, and making sure that that’s more and more predictable. So, we started a long time ago. I think we still have a long way to go in the future, but we’re making progress.
Host: Is anyone else using your language?
Andrew Phillips: So, we’ve developed, actually, three main languages. One for programming systems at the molecular level, another at the genetic level, and a third at the network level. So far, we’ve had most success at the molecular level, because it’s much more predictable. This is sort of programming DNA systems to compute. And so yes, there are a number of people who have used that language. I’ve also taught some courses at the international Genetically Engineered Machine (iGEM) competition on using our genetic programming language. So yeah, we do have people using our software, but we are actually very careful about who we collaborate with…
Andrew Phillips: …and using the software mostly internally.
Host: Do you have names for the languages?
Andrew Phillips: Yeah, we have one, it’s called Visual DSD, DNA Strand Displacement, another one is Visual GEC for Genetic Engineering of Cells, and the third is RAIN, Reasoning About Interaction Networks.
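The molecular-level programming Phillips mentions (DNA strand displacement, which Visual DSD compiles and analyzes) boils down to reaction networks. As a hedged illustration only, here is a toy mass-action simulation of a single displacement step, where an input strand I displaces an output strand O from a gate complex G (I + G → O + waste). The rate constant, concentrations, and time step are invented; real tools handle full networks with rigorously derived kinetics.

```python
def simulate_displacement(i0=100.0, g0=100.0, k=1e-3, dt=0.01, steps=5000):
    """Forward-Euler integration of the mass-action kinetics of one
    toy strand-displacement reaction: I + G -> O + waste."""
    i, g, o = i0, g0, 0.0
    for _ in range(steps):
        rate = k * i * g  # mass-action rate of the displacement step
        i -= rate * dt
        g -= rate * dt
        o += rate * dt
    return o

print(round(simulate_displacement(), 1))
```

The reason this level is “much more predictable” is visible even in the toy: the output concentration follows directly from the rate law, so composing many such reactions into circuits stays amenable to simulation and analysis.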
Host: I love that. So, every so often I get a researcher on the show who has such an interesting side quest that we have to go there. I’m not even going to ask you about all the things you’ve done like snowboarding, kite surfing, Chinese kickboxing, Thai boxing – you’re just like this extreme guy. But you’re a qualified ballroom dance instructor and you were a member of the Imperial College Dance Team while you were getting your PhD. So, I just… I have to know, how did you get involved with the Strictly Ballroom set?
Andrew Phillips: Okay, so how I actually got started was, I was sort of looking forward to my wedding and wanting to make sure that I did a good job on the first dance. So, I thought I would attend a couple of ballroom dancing classes. But then I got invited to audition, and it all took off from there. So, I was part of the university team. We used to travel around the country and compete with other universities. It was great fun. We used to have lessons, you know, and practice several times a week. And I really enjoyed it. And then, you know, because we had to do all of that, I thought I could, as well, take the exams that qualify you to be a ballroom dance instructor. And so, I did those. Sadly, I’m not so much involved anymore. That was a long time ago.
Andrew Phillips: Uh, my best dances were the waltz and the foxtrot. I really enjoyed it. I still do the odd salsa from time to time.
Host: Awesome. All right. As we close, I like to ask my guests to leave our listeners with some parting thoughts. So sometimes it’s advice, sometimes it’s wisdom, sometimes it’s predicting the future. What would you say to aspiring researchers who might be interested in the field of computational biology? What are the big, open problems, and what kinds of people do we need to help solve them?
Andrew Phillips: Well, the first thing to notice is that it’s really an interdisciplinary endeavor. So, we need mathematicians, computer scientists, people with expertise in machine learning, programming languages, lab automation, and of course, biologists, experimental biologists. So, I would say that if you’re looking to get into this field, it’s really important to at least understand the intersection of these different disciplines or a subset of these disciplines. Someone who can do biological experiments but understands the principles of, say, machine learning, could really help make some of these exciting breakthroughs at the intersection of the two fields. The other thing is that I really think that programming biology is going to transform many of the industries that are in existence today. I think it’s a sort of an underpinning technology that will help transform medicine, food, energy, and build the foundations for a future bio economy that’s based on sustainable technology. So, it’s really going to be an exciting field, and I would encourage anyone with an interest to join.
Host: Come help us.
Andrew Phillips: Exactly.
Host: Andrew Phillips, thank you for coming on the show today, and sharing all the insights in programmable biology.
Andrew Phillips: My pleasure.
Dr. Andrew Phillips
Biological Computation Group at Microsoft Research in Cambridge
Length 1:27:12 | Sep 28 2016
For decades, biologists have read and edited DNA, the code of life. Revolutionary developments are giving scientists the power to write it. Instead of tinkering with existing life forms, synthetic biologists may be on the verge of writing the DNA of a living organism from scratch.
adjective: far on or ahead in development or progress.
SYNONYMS. progress, make progress, make headway, develop, improve, become better, thrive, flourish, prosper, mature. evolve, make strides, move forward, move forward in leaps and bounds, move ahead, get ahead. informal go places, get somewhere.
Typical definitions include these types of traits: Civilization is characterized by five traits: specialized workers, complex institutions, record keeping, advanced technology, and advanced cities.
Free will is the ability to choose between different possible courses of action unimpeded.
Free will is closely linked to the concepts of responsibility, praise, guilt, sin, and other judgments which apply only to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition. Traditionally, only actions that are freely willed are seen as deserving credit or blame. There are numerous different concerns about threats to the possibility of free will, varying by how exactly it is conceived, which is a matter of some debate.
Some conceive free will to be the capacity to make choices in which the outcome has not been determined by past events.
Devices that enable their users to interact with computers by means of brain activity only.
Gnosis refers to knowledge based on personal experience or perception. In a religious context, gnosis is mystical or esoteric knowledge based on direct participation with the divine. In most Gnostic systems, the sufficient cause of salvation is this “knowledge of” (“acquaintance with”) the divine.
Gnosticism says that humans are divine souls trapped in the ordinary physical (or material) world.
In the Gnostic view, there is a true, ultimate and transcendent God, who is beyond all created universes … To worship the cosmos, or nature, or embodied creatures is thus tantamount to worshipping alienated and corrupt portions of the emanated divine essence.
The rooster is a symbol of resurrection, and observance.
The rooster comes to open your eyes to the reality of your surroundings.
The rooster offers much light and illumination to their communities. The rooster spirit animal is closely linked to the god Apollo. In fact, the rooster is the sacred sign of the god of the sun. Also, the rooster is the sacred picture of Zeus, the chief of the Greek gods.
The rooster is associated with confidence. It enables you to branch out on your own. You won’t feel the pressure of trying to blend in with others.
People with the rooster spirit animal do not hide in the shadows. You are not afraid to strut your stuff.
The rooster spirit enables you to share positivity with the world around you. Your community will be better off for it.
The rooster meaning in your life encourages a psychic aptitude that you can easily tune. As such, you are able to harmonize your physical realm with your spiritual one.
People who relate to rooster totem are very optimistic. You are able to use the lessons from the past to make your present and future better.
The rooster is a time-keeper by nature. It serves as a wake-up call in your life. When you hear his voice in your slumber, it awakens you both physically and spiritually.
Your mind will be sharper for it.