Genomic Biomarker Selection for Companion Diagnostic Development

THURSDAY, SEPTEMBER 16, 2021 | 11am EDT

Learn from Experts at Illumina, Gilead Sciences, Loxo Oncology, and Taiho Oncology on Leveraging NGS for CDx Testing

Molecularly targeted therapies are increasingly common in routine clinical oncology (Ersek et al., 2018). Precision molecular-based therapy often demands that the underlying genetic cause of a patient’s disease be determined to inform therapeutic decision-making. The intrinsic biological complexity of disease, combined with the complexity of exhaustively curating clinical evidence, is a barrier to efficiently and accurately identifying and confirming molecular biomarkers for inclusion/exclusion (I/E) criteria to support companion diagnostic (CDx) development.

Comprehensive knowledge of disease-associated variants, or Genomic Landscapes, supports CDx submissions by providing insight into disease-causing variants, their pathogenic mechanisms, and their actionability. By enabling I/E criteria to be determined a priori, these data allow for more efficient and accurate trials and maximize the enrollment of patients likely to respond to therapy.

In this webinar, expert panelists from Gilead Sciences, Illumina, Loxo Oncology, and Taiho Oncology explore the utility of comprehensive Genomic Landscapes for precision medicine in oncology. Drawing on a wealth of experience in the oncology CDx space, our speakers highlight the current applications, upcoming trends, and future challenges of CDx development in the modern precision medicine era.

Topics:

  • How leading pharmaceutical companies are using genomic sequencing as part of their CDx strategy
  • Challenges associated with genomic biomarker selection for CDx development
  • The emergence of AI and machine learning-assisted technology within genomics to develop inclusion/exclusion testing criteria
  • The benefit of comprehensive genomic landscapes in cancer diagnosis and treatment

Speakers:

David Eberhard, MD, PhD
Senior Medical Director, Oncology Diagnostics, Illumina

Abdel Halim, PharmD, PhD, DABCC
Vice President, Biomarkers and Companion Diagnostics, Taiho Oncology

Scott Patterson, PhD
Vice President, Biomarker Sciences, Gilead Sciences

Anthony Sireci, MD
Vice President, Diagnostics Development and Medical Affairs, Loxo Oncology

Moderator:
Mark Kiel, MD, PhD
Co-founder and Chief Scientific Officer, Genomenon

Q&A

TRANSCRIPT

MARK: Hello, and welcome! I am your moderator for this joint presentation from Illumina and Genomenon: Genomic Biomarker Selection for Companion Diagnostic Development. My name is Mark Kiel, and I am the Chief Scientific Officer and co-founder of Genomenon, where we are organizing the world’s genomic information to drive clinical diagnosis and drug discovery. We have a full agenda to cover today and a great lineup of panelists. As a reminder, this webinar is being recorded and will be emailed to you after the event. Also, you can submit questions in the Q&A box in the GoToWebinar panel. If we have time, we’ll cover some of those audience questions after our panel discussion.
The themes of today’s discussion will be the role of genomics in companion diagnostics, the challenges attendant on this relatively new discipline, and emerging trends for genomics in companion diagnostics, with a focus on oncology. We will begin with a brief introduction from Dr. David Eberhard, Senior Medical Director of Oncology Diagnostics at Illumina, our co-sponsor. David, I invite you to turn your camera on and begin.

DAVID: Thank you, Mark! It’s my pleasure to be here today. Our conversation is going to be about precision medicine. I really like the definition provided here: “The use of therapeutics that are expected to confer benefit to a subset of patients whose cancer displays specific molecular or cellular features.” These specific molecular or cellular features are what we’re talking about when we talk about biomarkers in cancer. This slide shows the rapid pace of the development and approval of biomarker-driven cancer indications over the past two decades. Really, the first biomarker-driven cancer indications came even before the turn of the millennium, in breast cancer, where ER/PR status was used to select anti-hormonal therapy. Breast cancer also saw the development of the anti-HER2 antibody Herceptin, which has shown marked efficacy in patients whose tumors overexpress HER2 or have amplification of the HER2 gene.
On this slide, you can see that since the year 2000, this explosion was really catalyzed by discoveries in the lung cancer field. In particular, lung cancer has been revolutionized by genomic biomarker discoveries over the past two decades. The EGFR inhibitors gefitinib and erlotinib were introduced for the treatment of lung cancer early in the 2000s. Just around the time they were being introduced, discoveries were made about EGFR mutations that sensitized lung cancers to treatment with these inhibitors. EGFR-mutant lung cancers displayed remarkable sensitivity to them. Subsequently, a huge amount of attention was given to identifying the genomic landscape and describing the genomic alterations in lung cancers and other tumor types. These genomic landscape discovery efforts resulted in the identification of several key driver oncogenes, which are mutated with some frequency in lung cancer. These are shown on the left, including RET, MET, ALK, EGFR, BRAF, NTRK, etc.
Over the next two decades, after these driver gene mutations were discovered, selective inhibitors against these driver genes were developed and brought through the approval process. They are now coming onto the market as targeted therapies, approved for use in lung cancer patients who have these particular mutations in these driver genes. We can really see the fruits of these discovery and development efforts on the right-hand side of this figure: in the past decade, the approval of more and more biomarker-driven drugs and companion diagnostics for patient selection. Likewise, immune checkpoint inhibitors were developed and introduced; these, again, are targeted therapies against particular molecules that modulate the interaction between immune cells and tumor cells. The use of these therapies is also driven by biomarkers that assess the presence of these targets in tumors.
Lung cancer has really shown us what can happen when we understand the molecular and mutational landscape and develop drugs that target these selective mutations in tumors. We’re seeing this beginning to happen in other tumor types as well. Here’s a figure showing a variety of other tumors for which particular genes are mutated and drugs are being developed, again with remarkable efficacy when patients are selected appropriately with biomarker tests. Some of these biomarkers are specific to particular indications. Others are found in one, two, or more different tumor types. Most recent have been the approvals of drugs in solid tumor indications that are defined solely on the basis of a particular biomarker, not on the tumor type itself. These drugs include, for example, pembrolizumab, which is approved in solid tumors that have a high tumor mutational burden or microsatellite instability-high (MSI-H) status. There are also NTRK inhibitors for solid tumors that have NTRK fusions. In these cases, the tumor type doesn’t matter; the key thing is the presence of the targeted mutation.
Oncologists and patients are really enjoying the fruits of drugs targeted against the particular mutations driving their tumors. The biomarker testing is done by the pathology laboratory. The incredible pace of development and adoption of these different targeted therapies has created a challenge for pathology laboratories: namely, how best to test for the biomarkers associated with the use of these targeted therapies. These biomarker tests are necessary in order to appropriately treat patients who have these mutations. The traditional approach of the solid tumor pathologist has been to cut sections of the tumor onto slides and perform a test on each slide. Traditionally, this has really been a single-gene testing approach. This could be an IHC assay, a fluorescence in situ hybridization assay, or a single-gene molecular mutation assay. With the adoption of more and more drugs that require testing for more and more different genomic biomarkers, this has created a bit of a challenge in how best to perform testing for the variety of biomarkers that are possible.
If we were to use the traditional approach here, a single gene approach, where each desired biomarker is tested and the result is returned to the pathologist, if that result is positive, great! The recommendation can be made for patient treatment. If the result is negative, then we go on and test for the next biomarker, and if that’s negative, we go on and test for the next biomarker. This is a serial iterative reflex kind of an approach. It has some disadvantages when we have such a number of potential biomarkers that need to be tested for, and particularly if the biomarkers are present at a relatively low frequency in the population. Tumor samples often are small and limited — by the time we get through each of these tests, we may run out of sample before we’ve been able to do all of the testing that is appropriate for that particular patient.
Secondly, it takes a lot of time to go through this iterative process, performing one test, waiting for the result, performing the next test, waiting for the result, and so on. If we went through the entire panel of genes, for example, that are approved in non-small cell lung cancer, this could take up to six weeks to perform testing, and up to 29 slides, if we have that much tissue left from that lung cancer sample. Jeff Conroy from OmniSeq and Roswell Park summed it up: “…with non-small cell lung cancer, the NCCN guidelines recommend a multitude of biomarkers be tested, and in some situations, this could require at least 30 slides…” Particularly in lung cancer, the available tissue may be only a very small biopsy. We frequently run into situations where we run out of tissue before all of the NCCN guideline recommended testing has been completed.
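To make the cost of that serial reflex strategy concrete, here is a toy sketch in Python. The gene list, per-assay slide count, and turnaround times are purely hypothetical illustrations, not figures from the webinar; the point is simply that a biomarker-negative patient consumes the most tissue and time under the serial approach, whereas a single CGP panel answers every question from one sample.

```python
# Toy model of serial single-gene reflex testing. All numbers are
# hypothetical; they only illustrate how tissue and time accumulate.

BIOMARKERS = ["EGFR", "ALK", "ROS1", "BRAF", "MET", "RET", "NTRK", "KRAS"]
SLIDES_PER_ASSAY = 3   # hypothetical slides consumed per single-gene test
DAYS_PER_ASSAY = 5     # hypothetical turnaround per single-gene test

def serial_reflex(positive_for, slides_available):
    """Test one biomarker at a time until a positive, or tissue runs out."""
    slides_used = days = 0
    for gene in BIOMARKERS:
        if slides_used + SLIDES_PER_ASSAY > slides_available:
            return f"tissue exhausted, {gene} never tested ({days} days elapsed)"
        slides_used += SLIDES_PER_ASSAY
        days += DAYS_PER_ASSAY
        if gene == positive_for:
            return f"{gene}+ found: {slides_used} slides, {days} days"
    return f"all negative: {slides_used} slides, {days} days"

# A biomarker-negative patient burns through the whole tissue block:
print(serial_reflex(positive_for=None, slides_available=20))
# -> "tissue exhausted, NTRK never tested (30 days elapsed)"
```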
Molecular pathologists have been working on how to address this challenge of serial testing, which results in tissue depletion and can take an unacceptably long time to complete all the required testing. What I’m showing here are three boxes. In each box, the different variant types (small variants, CNVs, fusions, splice variants) are listed on the left, along with a variety of genes that would need to be tested in non-small cell lung cancer. The top box shows the single-gene test approach I was just talking about, where, for example, if we wanted to test this patient for a ROS1 fusion, we would perform a FISH assay. That one assay returns that one single result, and from the grid, you can see that there are a lot more results that might be appropriate and important for the treatment of this patient. We want to be able to better assess all of those.
Next generation sequencing, or NGS, has enabled the development of gene panels. The first panels that were developed were small hotspot gene panels, where portions of the genes of interest were sequenced. The hotspots are where mutations that are of clinical interest most likely occur. This allows us to assess a wider number of genes of interest and gene variants of interest at the same time, but it still can miss those less common gene variants, which ultimately may be important for informing the management of any particular patient. Ultimately, where we’re going and where we are today, at the cutting edge of molecular genomics for therapeutic selection in lung cancer and other cancer types, is comprehensive genomic profiling. This is large-scale next generation sequencing based testing, which enables the assessment of all of the genes of interest and all of the different variant types in those genes that might be relevant, all at the same time. The testing result for all variants, all genes, is returned at one time from a single test from a single tumor sample. This can provide the most information the most efficiently, using one tissue sample, performing one test. One and done.
A quote from Jerry Wallentine from Intermountain Health: “The change from single biomarker testing to a comprehensive panel approach has been driven by a combination of factors, that include inherent efficiency of a single comprehensive panel, which is key among cancers and other sample types that have limited tissue.”
Evidence from real-world experience is starting to accumulate and be published, showing that this concept of genomically matching patients to targeted therapies or immunotherapies does result in patient benefit in real-world settings. Here are a few studies looking at lung cancer. On the left is a study from 15 community oncology centers comparing molecularly matched therapies with cytotoxic chemotherapies and demonstrating that matched, biomarker-driven therapy led to higher overall survival: 31.8 months in the matched therapy group, compared to 12.7 months in the chemotherapy group. In the center, another study of over a thousand non-small cell lung cancer patients, again comparing molecularly matched targeted therapy regimens to non-matched regimens, shows that matched therapy patients had an overall survival of 18.6 months, compared to 11.4 months in the non-matched arm. On the right is an interesting retrospective study of 101 lung adenocarcinoma patients who received comprehensive genomic profiling. Of those patients, 50 percent actually had a matched therapy chosen as a result of that testing, and among the patients who received matched therapies, the overall response rate was 65 percent. This really shows that routine testing using comprehensive genomic profiling can quite often, 50 percent of the time, deliver a result that is actionable to the oncologist and the patient, and that results in a terrific response rate.
Because of this evidence that comprehensive genomic profiling and next generation sequencing testing do benefit patients in oncology, large-panel NGS tests are increasingly recommended by medical bodies in the U.S. (such as NCCN) and in Europe (such as ESMO) for standard-of-care assessment and treatment of patients. For example, in the NCCN solid tumor guidelines (the wording is a little bit different in different guidelines), NGS testing is recommended in quite a number of different solid tumor indications, shown on the left. More specifically, comprehensive genomic profiling, or broad molecular profiling panels, are recommended in six different indications at this point in time, and these are increasing fairly quickly. Also, tumor mutational burden (TMB) testing, in the center, is recommended in quite a number of different tumor types as well. TMB requires a large-panel NGS test in order to be performed appropriately: one megabase or more of genomic sequence has to be interrogated in order to appropriately assess TMB. For these TMB recommendations, the implication is that NGS CGP needs to be performed. Likewise, the ESMO recommendations are going the same direction for targeted panels and CGP broad molecular panels.
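The TMB arithmetic behind that one-megabase point is simple to sketch. The function and numbers below are illustrative assumptions, not any vendor’s algorithm: TMB is reported as somatic coding mutations per megabase of sequence interrogated, so the panel footprint is the denominator, and a tiny hotspot panel makes the estimate statistically unstable.

```python
# Illustrative TMB arithmetic (hypothetical panel sizes and counts).

def tmb(somatic_coding_mutations, panel_size_mb):
    """Tumor mutational burden, in mutations per megabase interrogated."""
    return somatic_coding_mutations / panel_size_mb

print(tmb(13, 1.3))   # large CGP panel: 13 mutations over 1.3 Mb -> 10.0 mut/Mb

# Why >= 1 Mb matters: at a true burden of 10 mut/Mb, a 0.05 Mb hotspot
# panel expects only ~0.5 mutations, so observing 0 vs. 1 mutation swings
# the estimate between 0 and 20 mut/Mb. Over 1.3 Mb the expectation is
# ~13 mutations, and Poisson noise (sqrt(13) ~ 3.6) keeps the estimate
# close to the true value.
```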
Along with solid tumor testing using next-gen sequencing comprehensive genomic profiling panels, one really exciting development is our ability to use these same types of broad molecular assays to interrogate blood samples from solid tumor patients, where circulating tumor DNA, DNA that’s released from solid tumors into the bloodstream, can be isolated and sequenced. The tumor-specific biomarkers that can guide therapy can be identified in blood samples as well as tissue samples. Our ability to perform this kind of testing in liquid biopsies is complementary to our ability to test tissues. Right now, in pathology, we’re viewing these not as replacements for one another; they really are complementary techniques. An example: if we run out of tissue (earlier, we were talking about how tissue samples can be limited in quantity and may not have sufficient tumor cells available for testing), this is a situation where liquid biopsy can be very useful. If a patient is very frail or medically unfit for an invasive biopsy procedure, a blood sample could be used instead. Finally, liquid biopsy offers drug developers the potential for a simpler sample collection procedure, so that patients can be screened for eligibility for enrollment into molecularly guided clinical trials.
To summarize this introduction, comprehensive genomic profiling enables precision medicine by allowing us to comprehensively assess the clinically relevant biomarkers that guide the use of targeted therapies and immunotherapies. CGP can assess pan-cancer indications, a variety of different tumor types, a variety of different molecular targets, DNA and RNA, and the different variant types we talked about earlier. CGP allows the consolidation of all of the different possible tests into one test that covers everything. Sample types can be either tumor tissue or liquid biopsies that look at circulating tumor DNA in the blood. These results maximize our ability to identify actionable variants. Clinical utility has now been demonstrated: CGP leads to molecularly matched therapy selection and clinical trials with better clinical outcomes for cancer patients. Thank you for allowing me to introduce this!

MARK: Oh, that was great, David! I invite you to stay on with your camera and join the other panelists as I introduce them. Other panelists, as I call out your name, please turn on your video and unmute your microphone, and we’ll begin the conversation. I will introduce you by title, and I’ll ask you to introduce yourselves by way of background. Abdel, welcome! Abdel Halim is the Vice President of Biomarkers and Companion Diagnostics at Taiho Oncology. Scott Patterson is the Vice President of Biomarker Sciences at Gilead. And I think it’s not too familiar to call you Nino: Nino Sireci is the Vice President of Diagnostics Development and Medical Affairs at Loxo Oncology. Welcome, everybody! Thank you, David, for that great introduction. I have about 15 extra questions beyond the ones I came prepared with, but let’s begin by having the panelists introduce themselves. I’ll start with you, Abdel. If you wouldn’t mind, tell me what you do at Taiho and what your experience has been.

ABDEL: Sure! At Taiho, I manage the biomarkers and companion diagnostics. Currently, I’m running two companion diagnostics for registration; both of them are next generation sequencing panels of 500+ genes. In addition to that, I am enabling patient selection in many early-phase clinical trials, and in many clinical trials that include next generation sequencing. Beyond patient selection, as Dr. Eberhard just said, genomic profiling is now becoming more important for clinicians to manage their patients above and beyond targeted therapy. It enables the clinician to know the prognosis, or to have an idea about the prognosis, and to monitor patients using liquid biopsies. Thank you.

MARK: Thank you! Okay, Scott?

SCOTT: Great, thank you, Mark, and great to be here! Thank you for the opportunity. I have the privilege of leading Biomarker Sciences at Gilead, which is responsible for biomarker and translational work and diagnostics across all therapeutic areas, from research through all phases of development, as well as into post-marketing. We also currently have a registrational study with patient selection using NGS. My background is 13 years in academia, finishing up as a faculty member at Cold Spring Harbor Laboratory. I joined industry 29 years ago in discovery research, spending the last 18 years involved in biomarker and translational work, and the last 14-15 building toward diagnostics from the pharma side. I’ve been lucky enough to be involved in a couple of PMAs, a De Novo 510(k), and multiple CDx partnerships for a range of analytes, including therapeutic drug monitoring, soluble proteins, tissue-based assays, and genomics, including, of course, NGS. It’s a privilege to be on the panel. Thanks for the opportunity!

MARK: Great! Welcome. Nino?

NINO: Hi, everyone! My name is Nino Sireci. I lead Biomarker Development and Medical Affairs at Loxo Oncology at Lilly. My group actually spans the gamut from early-phase development, focusing on biomarkers and translational work, through patient selection and CDx selection, through post-marketing and medical affairs as it applies to the diagnostic. In the clinic, we currently have one asset that has multiple CDx partnerships, including one with Illumina on the TSO 500, and others in development in the pipeline. Prior to joining Loxo, I was a molecular pathologist at Columbia University. I did clinical testing and worked with the Illumina and Thermo platforms in my laboratory, so I have a very practical sense of what it means to actually bring on these platforms internally, to struggle not only with the technical aspects of the technology, but also with the financial burden of bringing on these platforms, particularly in the setting of limited reimbursement for non-CDx platforms. I’m happy to be here!

MARK: Yeah, great! Nino, I’ll start the first question with you. We want to get to emerging trends toward the end of our discussion, but let’s begin in the past. Dr. Eberhard set it up quite nicely: Has NGS made your jobs easier or harder, and what are the ramifications of NGS overall, from the perspective of improving patient treatment? Nino, you alluded to some of them. Net, has it been better or worse? Or, if it’s a mixture, please clarify.

NINO: Well, I’ll put on my pharma hat first and say that there’s an opportunity to actually make the CDx regulations useful. The purpose of CDx regulation is to ensure quality for our patients and to help sponsors feel confident that they’re selecting the right patients for their trials. With the advent of large NGS panels, panels that are comprehensive, there’s an opportunity for CDx assays to actually be meaningfully useful within the clinical laboratory. I think in the past, CDxs that were single-analyte, specific to one analyte and one drug, just weren’t reasonable to bring into the clinic. They took too much money (they’re usually more expensive) and too much time from a CLIA uptake perspective. It just wasn’t reasonable, and so no one used them. I can say, as a pathologist, I would never have brought a single-analyte CDx into my laboratory. The advent of kitted CGP assays, like what Illumina and other manufacturers produce, actually helps make the CDx regulation work and improves quality for patients. From a clinical perspective, there’s the issue that you’d have to use minimal or small bits of tissue for each single-analyte approach. In an era where we have ten biomarkers in non-small cell lung cancer, and a growing number in other tumor types, it makes sense to take the tissue and do the most with it that you can. The struggles, or the pain points, have really been around implementation: getting high-quality testing in the laboratory, getting alignment on reporting structure and interpretation, and frankly, getting paid for the work has been a real struggle. Even with the Medicare NCD, we see limitations in coverage and reimbursement for large panels by private payers. We certainly see globally, across Europe and Asia, that reimbursement is still an issue for these larger panels. We have a lot more work to do, but I think it is the right move from a quality and patient care perspective.

MARK: Great. Scott, let’s focus a bit on the implementation aspects. I’m happy to let you build on any of the other points that Nino made, but if you could focus on what the challenges have been, from an implementation perspective, and we can start to branch out the conversation from there.

SCOTT: Sure. From the clinical trial perspective, obviously, having a comprehensive panel is a significant advantage, because you get to explore additional genes which you may have some ideas around from an exploratory standpoint, but for which you wish to gain more data. That’s a clear advantage. The challenge we face, of course, is that although we’re thrilled to see comprehensive genomic panels being introduced into patient care, as they should be, the fact that they are mostly laboratory-developed tests creates enormous challenges in the clinical trial space. Centers wish to utilize their existing tests. That becomes a challenge when you wish to use the test that’s the market-ready version of the IVD that you wish to file to register your drug; that will be the test that demonstrates clinical validity, and this creates challenges. I understand those challenges, certainly from the clinical side as well, but really, until we’re at the stage where there are authorized, registered diagnostics being used in all the clinical centers, we’re going to continue to face this. It will be a great scenario when we have platforms that are registered and we truly get to the stage of adding content. And not just adding content, but also adding clinical validity to biomarkers that are already part of panels that are registered and authorized by health authorities globally. Of course, we’re going to see lots of changes in Europe; that’s going to raise the standard a lot. That’s where we see the challenge at the moment, but also a really great opportunity for the future. I think this ability to have comprehensive genomic panels is one that many of us have wanted for a long time, and we’re very happy that they’re here, but there are some challenges to consider for implementation.

MARK: Yeah, when I was coming up in my research training, next generation sequencing was becoming a thing, and it was a great bonanza. I was wearing a separate hat on the clinical side in my MD/PhD training, and I recognized that there was a challenge to standardization and implementation and ensuring reproducibility across molecular diagnostics labs. Abdel, I don’t know if you want to build on that implementation challenge, particularly from the perspective of standardizing the implementation of these assays, as well as the interpretation?

ABDEL: Yeah, sure. I agree with all that has been said so far. Implementation is probably the current main limitation for bringing next generation sequencing into clinical trials for companion diagnostics. No doubt we have the science, we have the technology, and the clinical value is proven. The clinician likes to have as much data as possible about his or her patient, so that he or she can monitor the patient, look at biopsies, all these types of things; prognostic biomarkers, to know at least whether the patient is carrying particular mutations. This is very good to have, and it has been proven. Implementation, from my perspective, has two aspects. The first one is that the clinical CROs who enroll patients are still working with the mentality of commerce. Some of them, even including the major CROs, are still behind in understanding companion diagnostics and their implementation, and in approaching sites with the ability to do companion diagnostics and targeted therapeutics. This is one aspect. The second aspect is the regulatory aspect: how to reach common ground between FDA reporting, CLIA reporting, and exploratory reporting. Labs, especially the big organizations (I wouldn’t like to name any, but you know them), are still working within the bureaucracy; they are not open to innovation, they are not open to adaptation. They are just going to report whatever the companion diagnostic claim will be. For example, if it is an EGFR T790M mutation, this will go in the claim as the drug’s target for this particular mutation. The FDA will say, “this will be your claim, and this will be your submission for IDE or significant risk determination and PMA.” At the same time, the clinician would like to have the other data from the 500-plus genes. How do they get them? There are two ways you could report them: under CLIA, for which you need CLIA validation (some clinical labs are reluctant even to go down this path), or as exploratory biomarkers, but then you have to bridge the regulation to the science. If you report biomarkers under the umbrella of “exploratory,” they are not for medical decisions. Who is going to provide this data to the clinician under an IDE? How can anyone use data labeled exploratory for patient management? So implementation, as I said, I believe is twofold: one part related to the mentality of the CROs and the big laboratory organizations, and the other part how to reach common ground between CLIA and FDA, and how to make this scientific data reach the clinician without legal liability or other issues that may follow.

MARK: Let’s build on the liability aspect. David, one of the questions that I jotted down as you were talking about comprehensive genomic profiling was twofold. There’s the challenge of a great richness of information, particularly if you’re a pathologist (I was trained as a pathologist); it’s different than on the genetics side, where the hope is that you can make a clear-cut interpretation from a yes-or-no analysis. Can you speak to the challenge of dealing with too much information from CGP, and how it’s being addressed, as well as the risk that we might run from false calls? I’ll invite everybody on the panel to weigh in as we start talking about the evidence at the variant level, and how that feeds forward into guidelines and guidance for clinical interpretation. David, if you wouldn’t mind answering that first question.

DAVID: Yeah, thanks, Mark. It’s a great question, and it’s very complex and rich. The part that I’ll focus on relates to something a couple of the panelists already talked about: the challenges of validation. It goes back to, what is validation? It’s demonstrating the evidence to support a claim or an assertion that we wish to make. Particularly, if that assertion is that there’s a relationship between this biomarker, between this result, and the outcome of a treatment action based on that result, recommending a therapy: what should I do, what should I treat a patient with, given this biomarker result? While scientifically that’s very exciting, and we can think about pathways and relationships and come up with ideas of what should be done or what we’d like to see tried in the clinic, we really can’t do it that way. We can’t be so experimental...

MARK: We can’t be cowboys in the clinic, I’ve said.

DAVID: Exactly. I’ll say that the FDA has been a great partner with us in the extremely complex challenges of developing comprehensive genomic profiling biomarkers and NGS, but there are a lot of really complex challenges to overcome. One is the validation of results. Abdel talked about being able to return all of the results back to the investigators in a trial in order to better understand what’s happening with those patients. That’s a hugely important question, but how do we validate those results so that we can trust them? Then another aspect is, once we have those results, what are the clinical associations that we’re going to make? Particularly if we get to the point of making a recommendation: I’m going to do something or not do something. There’s a fine line between the returning of a laboratory result and the interpretation of that result, which edges into the art of medicine. FDA makes clear that interpretation, annotation, and the like of results is something that really cannot be appropriately validated using our current tools. So we might separate those: here is the result from an FDA-approved test or an IVD device, and then someone needs to interpret those results. If we’re developing IVDs, we’d love to have a product that we could put right in the oncologist’s office that would pump out a report that would be crystal clear about what to do, but in fact, it’s rarely that way. We need experts, pathologists, geneticists, really an entire team of people who know about cancer genomics, who know about cancer patient management, who are familiar with each individual case, to come together, look at those results, discuss them, and formulate recommendations for patient management. This is a molecular tumor board. What we are really finding, now that we have the ability to generate these large-scale genomic results, is that a big challenge is making them understandable and useful to the clinician, and not being cowboys, as you said, Mark, but doing this in a very thoughtful way that does benefit patients. Illumina and other companies are developing software solutions to try to make this happen most efficiently and to deliver end-to-end solutions, but right now, I don’t envision that there is a software approach available to replace the molecular tumor board. We can bring the data to the experts, but we need those experts to really synthesize the data and make the recommendations.

MARK: Scott, I think you’re about to jump in — is David correct? Are we going to continue to deal with ambiguity in the interpretation, or will we reach a point where we as a field have collected enough evidence at the variant level to rule in, for treatment or diagnosis, specific patients based on their genotype? So that question, or some other question that you had in your mind.

SCOTT: Sure, thanks, Mark. Yes, there will certainly be cases where you still need that evaluation, particularly when it’s something that’s outside of the clinical validity that’s been established around a specific gene and set of variants. The challenges we get, of course (and we’re grateful for the work that we’ve done with you, Mark, around this) are when we have a gene with a very large spectrum of mutations, where you’re never going to get clinical validity on all of those mutations. That pipeline calling is actually what we see as the biggest issue between LDTs and companion diagnostics. This is something that others have found as well; it’s really a challenge, and it will certainly become more apparent. When you have these genes with a complex spectrum of mutations, is that really a deleterious variant? How are you going to interpret that? When you’re coming in from the companion diagnostic side, it’s got to be defined, and even then, there are going to be some areas of ambiguity. How is that going to be handled? But again, I would raise the concern around pipelines being different. By the way, this has been shown to be an issue even with very simple sets of variants: the pipeline calling is quite disparate. I think it’s a very tall order to expect that pathology labs will have that bioinformatics expertise. It’s very challenging, and this is why that’s a particular concern that we and others have seen, not only in complex genes, but even in those that have very limited numbers of defined variants.

NINO: I’ll jump in — I hear your point, Scott, and I understand that concern for the majority of clinical laboratories that exist in the community. As molecular pathologists, our training is, in fact, to assess variants that don’t fit the general criteria of pathogenicity or the approved list of variants that exist on a label. I actually find CDxs valuable, again, if they’re implementable and useful within a laboratory. I do, however, find that the lack of flexibility within that first page of a report can actually be quite damaging. There’s literature suggesting the opposite of what you describe, which is to say, you can be limited in what you find on a CDx if you’re not looking for those other variants that are outside the scope of what’s in the label. Maybe that variant hasn’t been included in the clinical trial, but has every biological reason to be considered activating. That’s where tools like the Genomenon database, or bioinformatics pipelines that can be tweaked and adjusted with appropriate oversight from a professional, are actually quite valuable. Just speaking from my experience in the NTRK space, and now in the RET space for some of our drugs, there is a wide spectrum of potential fusion partners. There’s a wide spectrum of mutations that occur in RET. We didn’t enroll all of those patients in our clinical trial, but it’s actually not a service to patients to limit use of the drug to those on a specific list that were tried in the trial. We need to behave like physicians and like scientists in the clinic, and make the right choices for our patients. I agree with your concern around the pipeline in LDTs, but actually think that sometimes, because of the flexibility, they can give our patients more opportunities.

SCOTT: Yeah, let me just clarify that if I could, because I probably didn’t state it very well. I agree entirely that there can be additional variants that need interpretation, and that should absolutely be the pathologist’s purview. What I’m saying is that it’s known, and it’s been shown, that labs that think they can call even the defined variants get them wrong. That’s my issue. It’s not the other side; it’s where the variants are actually defined. There are issues because of the pipelines the labs are using. As far as the interpretation of other variants, I agree with you entirely. I think that’s critically important. The readout and the report should not be restricted. It should state what’s clinically validated, or what could be on the label, but it’s got to include that other information. That’s absolutely important; sorry I didn’t make that clear.

NINO: I appreciate the clarification! I would say we’re living in a world right now where, even from a legislative perspective, we’re trying to balance this world of kitted, CDx-based, FDA-approved platforms versus LDTs that may be approved by another mechanism. I 100%, wholeheartedly agree that there is no standard as of yet. Even CLIA really lacks the teeth to get into some of the detail that’s required for pipeline analysis, and the rigor required for the analysis. I think the answer is standardization of performance criteria, and all of us aligning as a field on what’s required for the analysis of a tumor sample, not necessarily always submitting to a CDx process. If we were to agree on those criteria, I think we’d actually solve the problem that you just outlined, Scott, so thanks for bringing that up.

MARK: Open question to the rest of the panelists: do you see a role for regulatory bodies in helping sort this out, and guide where this trend might take us to the betterment of the patient community and providing the most appropriate care? Or do you see them as an impediment because of their historical experience not being up to date? Abdel?

ABDEL: Yeah, actually, I would like to expand on the points about bioinformatics. There are three major deficiencies in the pipelines and bioinformatics tools that we are using now. First is the lack of standardization, the lack of harmony, between different bioinformatics systems. Each team, of course, claims that they have the best. No one knows which is the best! Recently, I ran an exercise. I took the BAM files (not even the raw data or raw sequence) from one of the nation’s leaders in liquid biopsies, and I gave them to another lab to run through their bioinformatics tools. Believe it or not, the rate of concordance was about one percent between the variants detected by the two organizations. As I said, one of them was the nation’s leader in liquid biopsies, one of the original leaders. The second part is variants of unknown significance. Our science is evolving every day. Variant significance calls are often driven by applications with no clinical validity behind them, most of the time, as Scott just said. Genomenon and similar organizations can play a very important role in how to identify and define variant significance, and how to link a variant detected at the DNA or RNA level with its biological implications in pathway mapping and signaling. The last point is that biomarker definitions are also very, very discrepant between different labs. When you speak with the FDA, no one will say, “if you are going to use multiple labs, make sure that the biomarker definition is the same.” It’s very unusual to have a standardized biomarker definition. For example, Illumina tests gene fusions or rearrangements on RNA, but other vendors test on DNA, which makes the biomarker definitions completely different. So who can lead all this? Can the FDA or the NCI lead it? Or CAP? Or do we need somebody else to do this standardization between pipelines, biomarker definitions, variant allelic frequencies, and so on? We need somebody to take the initiative.
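As an illustration of the kind of concordance exercise Abdel describes, here is a minimal sketch of how two pipelines’ call sets might be compared. The call sets and coordinates below are hypothetical, and real comparisons require careful normalization (left-aligning indels, splitting multiallelic records) before keys can be matched.

```python
# Minimal sketch (hypothetical call sets): concordance between two
# pipelines' variant calls, after normalizing each call to a
# (chrom, pos, ref, alt) key.

def concordance(calls_a, calls_b):
    """Jaccard concordance between two sets of normalized variant keys."""
    a, b = set(calls_a), set(calls_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

pipeline_a = {("chr7", 55181378, "C", "T"),    # illustrative coordinates
              ("chr12", 25245350, "C", "A")}
pipeline_b = {("chr7", 55181378, "C", "T"),
              ("chr1", 114713908, "T", "A")}

print(f"{concordance(pipeline_a, pipeline_b):.0%}")   # -> 33%
```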

MARK: Any takers, in agreement or disagreement? One question that I had, to build on Abdel’s VUS commentary: we’re learning more all the time, and more is being promulgated into the literature. We talked before about cases where the evidence straddles the line between sufficient and insufficient to make a definitive call on an actionable variant. What are we to do with newly emerging information? Especially if the FDA and regulatory bodies suggest that you need to lock the biomarkers down for inclusion a priori, how are we going to address that gap as new information is brought to bear?

SCOTT: I think, Mark, one opportunity, and this is difficult within the current regulations, is that we need to be able to capture that data into EMRs and track those patients; we need to build up this observational evidence. There are real opportunities, then, if you have confidence in the variant calls that are made and they’re in the patient’s medical record. You can start to accumulate this data over time to see what’s happening. Is that an opportunity to go into an iterative mode, to expand knowledge around which particular variants are impacting patient response or resistance, or hopefully driving a response? It’s very important to see how we can capture that information. To the point raised before, it’s critical to standardize. If we have harmonization, then we can be very confident in the data that’s being accumulated. We can have greater confidence, therefore, in submitting it to actually expand the claims, or, more importantly, to inform the guidelines and patient treatment. If we’re able to capture that into EMR data, which can then be retrieved to follow those patients, I think there’s a great opportunity. It goes back to that previous conversation about how confident we are in the data we’re accumulating, and to being involved in other efforts: looking at common samples against which platforms can be tested, irrespective of whether it’s an LDT or a companion diagnostic test, and then that other point around the bioinformatics. Capturing that data into the EMR really could start to address the concerns that Abdel raised.

NINO: I’ll jump in and just say that something about what you said really resonated, Scott. There’s one issue, which is expanding the label and including those variants in the label. There’s another, which is actual clinical practice. The one I’m most concerned about is actual clinical practice. How do we help the laboratories that are testing, even if they’re using a CDx, keep up with the literature? How do we help develop information and knowledge around novel variants? That’s the bigger concern. Your idea around tapping into the EMR resonates with me. This is a problem in the clinical trial world in general, not just in precision medicine. How do we learn more about our drugs during the post-marketing phase of development, and how do we then feed that back into our development strategy and back to our clinicians? We have a lot to learn about how to capture and utilize real-world evidence and real-world data. It’s going to be sloppy, it’s going to be messy, but it has real value to patients, and frankly, to our labels.

MARK: One final question. David, I don’t know if you have a prior engagement that you have to drop off for, but if you’re here, you can answer; otherwise, I’ll solicit opinions from the other panelists. David talked about this concept, which isn’t really new anymore: treating the mutation or the biomarker independent of the tissue type. How do we see that complicating implementation and the interpretation of the success of a trial, especially when you might have disparate results across different patient types?

DAVID: Well, one thought is that it’s an attractive experiment that has to be approached carefully. Cancer genomics in the clinic has been one of these areas where we’ve found that the more we know, the more we need to know. We’re very excited by the example of finding a mutation in a particular tumor type that can be treated effectively with a targeted therapy, like an EGFR mutation in lung cancer, but if you find an alteration in EGFR in another tumor type, will it respond the same way? Certainly, there’s been a lot of clinical research and clinical studies that have investigated this. Sometimes, we find that it works great. We can have, for example, NTRK fusions in a variety of different tumor types that appear to respond to NTRK inhibitors. That has developed into an approved pan-cancer indication. But there are other times when we’ve learned, oftentimes from these clinical experiences, that different tumor types have different underlying biologies which we don’t fully understand. If we look at the RAS/RAF/MAP kinase signaling pathway in lung cancer versus colon cancer, those seem to be different diseases in a lot of ways. Particularly when we think about refractoriness to treatment, or the emergence of secondary resistance pathways and mutations, colon cancers do things differently than lung cancers when they’re treated with these targeted therapies, particularly in the RAS/RAF/MAP kinase axis. That’s one example. We have to be careful about overgeneralizing too quickly. These are ideas which deserve to be explored in a clinical research environment, because that’s how we make advances, but we have to be careful not to get too far ahead of ourselves in assuming that what we think will happen clinically actually will happen clinically.

MARK: So I’m looking at the time, I’ll invite other panelists to weigh in on that. Otherwise, since that is a forward-looking statement that David provided in his answer, give me some insight from your perspective about where this trend will take us, and what kind of complexities might we encounter as we mature the use of genomics and companion diagnostics? I’ll begin with Nino.

NINO: Sure! First of all, thanks for inviting me onto the panel. It’s been a really good discussion. David, always a pleasure to hear about what’s going on at Illumina. From my perspective, the direction CDx regulatory oversight of assays is going, as it applies to trial enrollment, is probably to move away from single-test, single-approval toward class-based tests and approvals, and frankly, toward setting criteria for performance rather than approving specific assays. It’s just more reasonable. It meets sites where they are, it increases patient access to high-quality testing, and it allows trial sponsors like my company to accrue patients where they are, rather than having to take additional tissue and waste it on a central test. That said, on the clinical side of things, I think we’re really at a crossroads as to where the FDA will fall in terms of regulation of laboratory-developed tests: whether they’ll continue to cede control to local departments of health and CLIA, and help modernize CLIA, or adopt a more centralized review process at the FDA. That currently remains unknown. It will have, I think, pretty large ramifications on the practice of molecular pathology, in particular at academic medical centers like where I, and a lot of us, used to practice. I look forward to seeing how that turns out. You can clearly see from my soliloquy that I feel strongly one way, but I think in the next couple of years, we’ll figure it out.

MARK: You’re entitled to your monologue. Scott, where do you see the future taking us?

SCOTT: I like that future that Nino described. I really hope it can occur. I actually think that one of the paths, or drivers, will end up being when whole-transcriptome platforms are authorized. That will drive the requirement for this continuous learning, which is what we really need, and it leverages what Nino says. I just couldn’t agree more. It would be great if we didn’t have to send samples to a central lab, creating concerns and inconveniencing patients, let alone the physicians, in having to do this. I think whole-transcriptome sequencing is going to be a driver as well. There’s going to have to be a different approach there; it’s a little easier to take the current approach to regulation when you have defined variants. Thanks for having me on the panel! It’s been great.

MARK: Thank you. Abdel, any last words on the future of companion diagnostics and genomics?

ABDEL: I think it will grow, by all means. However, there’s a long way to go, and a lot of work that needs to be done, from the science to implementation, and especially on the regulatory side. I will not repeat what Anthony and Scott said, and David. One aspect that we did not touch on is that part of the impediment to implementation, and to attracting the big organizations, is probably the very low reimbursement, if there is any.

MARK: Yeah, definitely.

ABDEL: For example, big organizations like Labcorp or Quest can charge five dollars per blood glucose sample and still make more profit than they do on genomics. In terms of failure rates and quality control under CLIA, the rate of failure in blood glucose runs may be one in ten thousand, where the lab may fail. But the rate of failure in next generation sequencing can be as high as 25%. In order to get a sample reported, we need a lot of work and a lot of highly skilled people. Reimbursement and quality control are very important aspects.

MARK: Great, that’ll be the substance for the next time we all get together. Sorry that we couldn’t get to it, because as you point out, it’s quite important. I just want to echo the gratitude that I have for having you guys join and share your experience and insight and expertise. David, Abdel, Scott, Nino, thank you so much for a lively discussion!
A reminder to all of our audience members: we’ll follow up with an email with a recording of this webinar. If you have any questions, or would like to reach out to me regarding Genomenon and our offerings, or to David about what Illumina is doing with respect to CGP, please don’t hesitate! Thank you, panelists. Thank you, audience. Until next time! Bye.