Webinar

The Role of Mixed Reality in Craniomaxillofacial Surgery – An Interactive Discussion about Our First Experiences and the Future Outlook

Topics
Augmented Reality
Mixed Reality
Language

English

Description

Brainlab invites you to join our webinar, “The Role of Mixed Reality in Craniomaxillofacial Surgery – An Interactive Discussion about Our First Experiences and the Future Outlook”, presented by our four international speakers:

Bradley Strong, MD, Professor and Vice Chair of the Department of Otolaryngology/Head and Neck Surgery, University of California, Davis School of Medicine, Sacramento, CA, USA

Bernd Lethaus, MD, DDS, Director of the Department of Oral and Maxillofacial and Plastic Facial Surgery, University Hospital Leipzig, Germany

Alexander Bartella, MD, DMD, Resident in the Department of Oral and Maxillofacial and Plastic Facial Surgery, University Hospital Leipzig, Germany

Reinald Kühle, MD, DMD, Resident in the Department of Oral and Maxillofacial and Plastic Facial Surgery, University Hospital Heidelberg, Germany

This webinar will cover topics including:

  • 3D visualization technology insights: Virtual reality, augmented reality vs. mixed reality
  • Interactive case discussion
  • How can clinicians use Mixed Reality? Insights gained from my first experience with MR
  • Discussion about future capabilities—What is the next level of Mixed Reality?

Interested in learning more about Mixed Reality? Read our blog article!

We look forward to meeting you online!


Transcript

Jenna: Welcome to our Brainlab CMF Mixed Reality Webinar. We are all looking forward to an interactive discussion about the first experiences with our Brainlab Mixed Reality Solution for craniomaxillofacial surgery. Furthermore, we would like to discuss with you today where this technology could go in the future. My name is Jenna Lyda [SP] and we are live from Brainlab in Munich, Germany. It’s a pleasure for me and for Brainlab to have you all here today in this webinar. We have speakers from different continents and participants from over 50 different countries around the globe. That already proves how important this technology is for craniomaxillofacial surgery.

Before I introduce our speakers and we get into the discussion, I would like to explain a few points. The lecture will last about 45 minutes, followed by a 15-minute question and answer session. Questions can be submitted only through the online chat function; I will select them and put them to the speakers in the question and answer session after the lectures have finished.

This webinar is live, but it is being recorded so that all registered participants can watch it again: you will receive a special link after the presentation, where you can watch the video at any time convenient for you. For further questions, feel free to use the online chat function or send us an email to [email protected].

Now, onto our speakers. I’m very happy and proud that we have our speakers here today. And I happily welcome Professor Bradley Strong from the Department of Otolaryngology at the University of California, on the West Coast of the United States. Furthermore, we have Professor Bernd Lethaus and Dr. Alexander Bartella from the University Hospital in Leipzig. And we also have Dr. Kühle from the University Hospital in Heidelberg, also in Germany. A very warm welcome to all of you.

All of our speakers gained their first experiences this year with our Brainlab Mixed Reality Solution for preoperative planning in craniomaxillofacial surgery, for teaching, but also for patient education. They will now present their results one after the other, followed by an interactive case discussion. We will start with the presentation from Professor Strong from California. And I send a very nice good morning to you, Professor Strong. And I have to say: now, the virtual stage is yours.

Professor Strong: You’re able to hear me okay?

Jenna: Yes, we hear perfectly. Good morning.

Professor Strong: Good morning. All right. Well I’m gonna be starting with an overview of 3D visualization to kind of kick things off. And I thought I’d start with a quote. «If you always do what you always did, you’ll always get what you always got.» And I hope this talk sparks some interest in how you might apply new technology in your practice.

So what does 3D visualization mean and how do we apply it in our practices? And I thought I’d start with a patient of mine who had, perhaps, a little more ingenuity than common sense. And here he is. And so he ends up coming to our emergency department, and I get this. Right? But I don’t get this. I get this, but not where I wanna be. So what I’d like to show you, to give you an example, is maybe a typical scenario. A resident might call you and give you this CT scan. You’re looking at your PACS and you’re gonna see these images. And I’ll run it a couple times just so you can have a good look at what the patient looks like, so you can cement that in your mind.

Now, I would ask you: what you just looked at, was that this patient? Or this patient? And I bet you’d have a hard time choosing which one it is, and if I surveyed you, it would probably be about 50/50. And there’s a reason for that. The reason is, it’s very difficult for us to process 2D information and precisely put it together in our mind. It’s something we practice and get better at, but bottom line is, we have physical limitations. Why is that?

Well, if we look at a coronal CT scan, we see a piece of bone that might look like an N. We look at a sagittal, it might look like some kind of weird H. We look at an axial, and it may look like an F. And we can practice in our mind, and over years, we get better at putting together this 2D information. But we’re still not perfect at it.

Well, computers are extremely fast and very good at doing this. So they can take 2D information and compile it to make a very accurate 3D representation. And boom, you know exactly what you’re looking at when you see this. And we can actually use this information to plan and execute our surgical procedures.

So everybody got this? Take-home points. Okay. So, take-home point from the presentation: we can only treat what we can see. Right? We can only treat what we can see. And this is where I wanna introduce this concept of 3D visualization and extended reality. And we’re gonna introduce some terms here, extended reality being one of them.

So when I talk about extended reality, I’m talking about the real world being on the right-hand side here. And then as we move to the left on this reality spectrum, we get augmented reality, Mixed Reality, and virtual reality. And all the way on the left, you have something like the movie «The Matrix», where the main character is experiencing one thing but is physically in a different location. So the goal is to kind of go over some of these terms.

And another way to look at it is, under extended reality in this Venn diagram, you have augmented and virtual reality, with Mixed Reality being a combination of the two. And we’ll spend some time going over what each of these means.

So how do we apply it? What are the opportunities in maxillofacial reconstruction? Presurgical planning is certainly one of them; I think it’s a key opportunity. Intraoperative visualization certainly has great potential and is starting to be used. Patient education: whenever we’ve had these types of opportunities to educate our patients, they’ve always found it very, very helpful. And finally, medical education, training our young surgeons as they come up. This is gonna be a very powerful tool, I believe, in the future.

So architects, engineers, animators, they’ve been doing this for years, with software and hardware designed specifically for what they need. The question is, what do we have? And at this point in time, we’re starting to have tools. And this one in the front here, Magic Leap, is the one we’re gonna be talking about today. And I wanna just break this roughly into two areas, 2D and 3D viewers. And just to touch on some of the 2D viewers, because I think it is interesting: here’s an application on my phone, essentially. And my resident was sitting with me, and the application basically asks him to turn his head left and right. And it records the surface data.

And once we record that data, then we have it and we can look at it on the phone. So this isn’t truly a virtual environment, but these are very interesting applications that are coming up, quick and easy. Now, if we move into true virtual environments, we’re talking about 3D viewers. And this is the slide you’ve seen with this reality spectrum. Now, let’s start on the right, just to give a big picture, with augmented reality. And you may have seen this or heard about it in the past with Google Glass. And I think the key feature is that Google Glass, or augmented reality, superimposes information onto the external environment, okay?

So an example of this would be a pilot who is landing the plane, and is having information about air speed and altitude superimposed visually onto his visual field as the plane is being landed. Now if we hop to the other side of the reality spectrum, you have virtuality, or complete virtuality: virtual reality. And this is something where the key feature is it separates the user from their environment. And this can be powerful and very interesting. And I mentioned earlier, an example might be the movie «The Matrix», where the main character, Neo, is actually in a chair, but is experiencing something very different.


Professor Strong: So he’s physically in one place, but experiencing something else. And so I just wanna go into a couple other terms or introduce a couple other terms about virtual reality. There’s non-immersive and immersive. So I’m gonna show some examples. Non-immersive would be a scenario whereby you’re able to see around you.

And so this is myself with this skull that is superimposed on a cube. I’m looking at my iPhone, and as I turn the cube, you can see the skull turn and I can analyze the data. This is non-immersive. This is a non-immersive environment. While this represents an immersive environment: all you can see when you have the goggles on, or whatever, is the skull and anything that’s labeled or depicted on it.

There’s also non-interactive and interactive environments. So a non-interactive environment would be one in which you’re in a room or a space, and you’re seeing something but you can only visualize it. You have no ability to touch, move or interact with it. While an interactive environment would be something like this where you’re able to manipulate the object that’s in space. So this is interactive. So this would be an immersive, interactive environment.
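The two axes just described, immersion and interactivity, can be summarized in a tiny sketch. This is purely illustrative Python; the class and attribute names are assumptions for the example, not part of any viewer’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Viewer3D:
    """Two independent axes from the talk; names are illustrative, not a product API."""
    immersive: bool    # True: the goggles replace your surroundings entirely
    interactive: bool  # True: you can grab, rotate or slice the object in space

    def describe(self) -> str:
        a = "immersive" if self.immersive else "non-immersive"
        b = "interactive" if self.interactive else "non-interactive"
        return f"{a}, {b}"

# The phone-based skull-on-a-cube demo: you still see the room, but you can turn the model.
phone_demo = Viewer3D(immersive=False, interactive=True)
# A headset session where you can only look at the model, not touch or move it.
view_only = Viewer3D(immersive=True, interactive=False)

print(phone_demo.describe())  # non-immersive, interactive
print(view_only.describe())   # immersive, non-interactive
```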

So I’ve given both sides of the spectrum. Now, I’m gonna focus down on the Mixed Reality environment, which is what we’re really focusing on today and explaining some of the advantages of. And the difference here is that both physical and digital coexist allowing interaction. So we can see our physical environment, we can see each other, and we can see the digital objects, and we can manipulate them in that environment. So that’s the advantage of a Mixed Reality environment.

So an example here might be Tom Cruise in «Minority Report,» where he can see his environment, but has objects that he can manipulate within that.


Professor Strong: So what we’re talking about here is this Magic Leap One, these headsets. There’s a headset, which is quite light and comfortable to wear; it doesn’t inhibit you. There’s a power pack, or Lightpack, which you wear on your waist. You can see here, this is one of my neurosurgical colleagues, Dr. Shallay [SP]. And there’s the controller, which allows you to move and manipulate. And you can see these little lines: there are laser pointers that you can use to manipulate the data or the object in space.

So one thing I’d like to leave with you is an understanding of the workflow, of how you would get this into the headset and use it clinically yourself. So the workflow starts with a common modality, you know, imaging, CT, MR. That goes to the PACS. From the PACS, you can use the Brainlab Elements software to manipulate the data. You can highlight, segment, and bring in STL virtual objects. Once you have a plan that you wanna execute, the software displays a QR code. You put the goggles on and visualize the QR code, and that then transfers the data to the headset and allows you to manipulate it.
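To make that workflow concrete, here is a minimal Python sketch of the pipeline as described: imaging goes to the PACS, the data is prepared in planning software, and a QR code hands the plan to the headset. Every class and function name here is a hypothetical placeholder, not the actual Brainlab or Magic Leap API.

```python
from dataclasses import dataclass, field

@dataclass
class PlanningCase:
    """One case moving through the described pipeline (all names are hypothetical)."""
    patient_id: str
    volumes: list = field(default_factory=list)        # CT / MR series pulled from PACS
    segmentations: list = field(default_factory=list)  # highlighted / segmented structures
    stl_objects: list = field(default_factory=list)    # imported virtual objects (implants etc.)

def fetch_from_pacs(patient_id: str) -> PlanningCase:
    # Imaging (CT, MR) is acquired and stored on the PACS, then pulled into the case.
    return PlanningCase(patient_id=patient_id, volumes=["CT_head", "MR_contrast"])

def plan_case(case: PlanningCase) -> PlanningCase:
    # In the planning software you highlight, segment, and bring in STL virtual objects.
    case.segmentations += ["tumor", "carotid_artery", "orbital_floor"]
    case.stl_objects += ["patient_specific_implant.stl"]
    return case

def publish_session(case: PlanningCase) -> str:
    # The software shows a session code on screen (as a QR code in the described workflow).
    return f"mr-session://{case.patient_id}/plan-1"

def headset_join(session_uri: str) -> None:
    # The headset scans the QR code and receives the plan for manipulation in the room.
    print(f"Headset joined {session_uri}; the plan is now manipulable in 3D.")

headset_join(publish_session(plan_case(fetch_from_pacs("anon-0042"))))
```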

Now, these are relatively new, but I can tell you from my personal experience, every time I put these on another surgeon, they’ve just sort of been boggled and very excited thinking about the applications. They could use it either for patient education or presurgical planning. So nothing we show you today is actually going to represent how impressive it is…it’s just a very different environment that opens doors to different information transfer and educational opportunities.

So I’m gonna finish with a quick video here, where I was working with my neurosurgical colleague, Dr. Shallay.

Dr. Shallay: Dr. Strong, this is a 74-year-old who’s been experiencing some headaches and visual decline. It’s been going on for about six months. His primary doctor decided to get an MRI scan and found this mass.

Professor Strong: So he does have a fairly well aerated frontal sinus, and his ethmoid sinus is very aerated. He doesn’t have a lot of sinus disease. His sphenoid has some interesting septation. So we would be coming fairly far forward on his skull base, and we can kinda turn it and try to make an estimate of where we would be. Yeah, I mean I definitely think that there’s potential for an endonasal approach.

Dr. Shallay: At this angle, I can see the back walls nicely.

Professor Strong: Oh, yeah, that’s really nice. I think coming up from below here, we should be okay to get to the front of this tumor. And stay away from that sinus. I think sometimes when these tumors are very far forward up here, it becomes a little bit of a challenge. Not only with the repair, but also that working angle. And this way, we can look right over the dome of the tumor. And as we pull this down, we can separate it from the arachnoid and pull it up under a direct kind of line of sight. So that’ll be very nice.

So that’s an example of how we’re using it clinically. So I’ll just finish up. If you always do what you always did, you’ll always get what you always got. And I hope this does spur some interest in you to think about how new technology can play out in your practice.

So I think I’ll pass this on to the team in Leipzig to kind of talk about some specific applications and sort of dig into this a little bit more. Guys?

Professor Lethaus: Hello. Thanks, Brad, for this impressive talk. I really enjoyed watching that. Really nice movie clips you’ve got in there. I’m a big fan of «The Matrix» too, so I should have taken the blue pill, as we used to say. Hello to the world. Hello to everybody. Thank you very much for giving us this opportunity to show some of the small examples that we gathered together. This is my colleague, Dr. Bartella. My name is Bernd Lethaus. We are at Leipzig University in Germany, where it’s 4:00 p.m., so it’s more of a good afternoon from our point of view. Dr. Bartella has prepared a really nice short lecture about a case. And we also tried to put some scientific, yeah, evaluation on this topic to see: is it really a gain in what we do? Or is it just a nice toy? And I can assure you that we think this will put surgery, especially in the craniomaxillofacial field, on the next level. So I’m happy for you to start with the short lecture.

Dr. Bartella: Yes, hello, everybody, also from me. So we raised the question: is Mixed Reality the gap closer between virtual and augmented reality? Brad presented it quite nicely before. There were some publications in recent years about both virtual and augmented reality. They both have their advantages and their disadvantages. And we raised the question: where is the role of Mixed Reality?

So technically, Mixed Reality makes it possible to display the three-dimensional dataset, which has so far been limited to two-dimensional screens, in the middle of the room, and you can walk around it. And our idea was to prepare a clinical case, an oncological case, with surgeons but also with radiotherapists. And we discussed together, already in advance before we started the treatment, what our options were.

So we had a patient whose initial diagnosis was a squamous cell carcinoma of the left maxilla. Afterwards, the pathologist revised it and found that it was an ameloblastoma. And you can see it here in the left maxilla, painted in orange. It had a large extension. And the biggest question to raise was: is the orbital floor infiltrated? Because then we would have to remove the eye. So our plan was to resect the tumor, and to take frozen sections of the periosteum of the orbital floor and check whether there were cancer cells in them or not, or whether there were cells from the ameloblastoma or not.

Well, I think I’ll give it back to Professor Lethaus; he will say something about the surgical procedure that we prepared, obviously, in Mixed Reality.

Professor Lethaus: Yeah, well I think one of the biggest advantages of Mixed Reality is to get a better grasp of what the situation will be. What can you expect in this patient? I mean, you can see this is quite a vast tumor this patient has. Originally, Dr. Bartella has said it already, we thought it was oral squamous cell carcinoma, not an ameloblastoma. So quite a malignant tumor. And we discussed broadly, in advance, what is the best approach to go in there?

Obviously, you could do a [inaudible 00:21:17] to open the whole face, but this doesn’t leave really nice scars. And sometimes you leave this hollow emptiness below the skin of the face. And then you get scarring from it, and you get retractions and bad wound healing. So we discussed a lot whether we could maybe just do a facial degloving and try to combine it with a nasal approach, a transantral approach. So we discussed this and that.

You can go into this patient… Can we look further? Now, those are the resection margins we also put in there. So what is the actual volume that we have to remove? And then you can go in there and see which anatomical structures are to be removed. And a second thing that we discussed is: how much do we have to remove from the mandible? And you can see here that the mandible is also something to be concerned about. And so we then decided that we could go in there early, and first just take out some frozen sections from the orbital floor to be absolutely sure whether this was oral squamous cell carcinoma. Which it was not, in the end. So we could leave the…we didn’t have to do an [inaudible 00:22:41].

And then we had chosen, I think, a quite good approach. And the second thing that we tried to integrate there is to get the radiotherapist into those discussions. Does it make sense? Can we operate on this? What are your options for a good regimen of therapy afterwards? So the three of us stood there in this room, as you have seen before with Brad: oncologist, surgeon and radiotherapist, and discussed this case quite thoroughly. And I found this really…it gave something else, something in addition, to this case. And we found it quite good. I think we can go on.

Dr. Bartella: Yeah, so we started with the surgery. And here, you can see the resection of the tumor. And here, you can see something very interesting. We were not quite sure about three spots during the surgery, whether we had really resected the whole tumor, so we put some tags in there. You can see it in my right hand, it’s like a pen. And they were there for two reasons. First, we marked those spots also on the resection specimen. But the radiotherapists also used the same software program. So in case there was an R1 resection in the end, they could use a boost therapy for this very region.

And that’s what happened. Here, you can see the three spots we marked during the operation. And in fact, the middle spot was positive, but we could remove further tissue to achieve our R0 resection.

So this is the important thing that Mixed Reality provides for us: the combination and the interaction between several clinical fields, in this particular case surgery, radiotherapy and also pathology.

Professor Lethaus: I just want to mention, or stress again, the possibility to have the radiotherapist on board. Because we discussed this case afterwards, and we could show him exactly the points where we were not quite sure if we had reached a sufficient margin in this tumor resection. And this makes his life much easier, as he told me: to see, okay, those are the points I have to concentrate on. Here, we need more radiation than in other areas. So he was able to put more radiation into those regions that we had marked and could clearly show him afterwards, and spare other tissue parts. And he said, «Okay, this makes my life easier and this is something we have to use regularly in patients with such huge tumors.»

Dr. Bartella: Yeah, so we were quite happy about it. But personal experience is the lowest level of evidence. So it was not just the two of us alone; there were some people with us in the room: three consultants of oral and maxillofacial surgery, three residents, two radiotherapists and two students of dentistry. In total, 10 persons. And we asked them a couple of questions, to be rated from zero, being the worst rating, to 10, being the best rating. And the satisfaction with Mixed Reality overall was quite good; it was almost 10. We specifically asked about the occurrence of motion sickness, because in virtual reality, this is what happens when multiple persons are in one virtual room and somebody moves the dataset around: all the others may feel motion sickness.

The auto segmentation of the maxilla, of the mandible and of the floor of the orbital cavity, as you could see before, was really good in this case. We performed it with Elements from Brainlab and it was satisfying to us. And the three-dimensional perception was, overall, also very good.

Likewise, we had the same results with the display of the structures. I am not quite sure if you could see it before, but in fact, when you step into the skull, you can see the vessels of the tumor. And we could see the relation to the orbital floor and to the carotid artery; you know, we could pop in the safety margin and see exactly how many millimeters in front of the carotid artery we are. And how fast can we go?

And we believe it’s improving our workflow, and it has the potential for clinical implementation, especially for the interdisciplinary approach with the radiotherapists, but also with the pathologists. And four of us had some experience with virtual reality. Just to remind you, virtual reality means the glasses where you cannot see the outside world. So again, we simply asked: what do you prefer, mixed or virtual reality? The occurrence of motion sickness, as suggested before, was better, or rather not present, with Mixed Reality. The three-dimensional perception was likewise; there was no big difference. The display of structures was, probably due to the resolution, better in Mixed Reality, and the preoperative workflow was much better because we could integrate it with the Brainlab navigation system.

Likewise, the clinical implementation, especially because of the interaction with the other examiners in the session, was rated better with the Mixed Reality glasses. And intraoperative usability could not be rated, because neither was used intraoperatively. And last but not least, we compared it also to augmented reality. Motion sickness was not an issue with either pair of glasses. Three-dimensional perception was preferred in Mixed Reality, and the display of structures and the improvement of the workflow, likewise.

So again, the potential for clinical implementation cannot be highlighted enough. I think this is the important point for us: to integrate it, for example, in cancer boards or in the preoperative discussion of patients.

As a conclusion of our talk, of our lecture: we believe Mixed Reality is a promising new technology that is able to display three-dimensional datasets, and it seems to have many benefits in comparison to VR and AR. I don’t wanna bore you with this long sentence, but this is the conclusion from our publication that just got accepted in the «Journal of Computerized Dentistry». And I’d like to thank you for your attention; I’m happy…or we are happy to answer your questions.

Professor Lethaus: And we are more than happy to hand over to our colleague from Heidelberg, Dr. Kühle.

Dr. Kühle: Hello, thank you very much for your introduction and your words. And thank you again for your very nice presentations, especially for also giving it a bit of an academic approach. We are presenting from Heidelberg, Germany here as well. And before we dig into Mixed Reality, I wanted to go back a bit to virtual planning. What is virtual planning for us? It’s basically learning on a model basis. And as you see Watson and Crick on the right side: in 1953, they put all the information that was there in the literature together, created a model, and from that model they deciphered the structure of DNA.

So how does that apply to us? It basically means, for us, that all the slices of our CTs are little images that can be put together to mirror the patient’s anatomy. We can thereby generate a digital twin of the patient in a program like, for instance, Elements or other simulation software. We can mark pathologies. Or, as Brad showed, plan complex reconstructions in trauma. We can create a navigation plan and maybe even use CAD/CAM guides to bring all that into theater.

How does that work, on an exemplary basis? We, for instance, have a patient, like the colleagues from Leipzig presented as well, with a malignancy of the upper left jaw. We’re able to use our imaging. We can segment structures and see the risk structures around. We can then plan our resection according to our plan, go into the OR, and use facial degloving here as well, to have, if you can say so, a minimally invasive, or at least aesthetically beneficial, approach.

And as the colleagues also mentioned already, the spatial data are really a game changer in oncology. Because you can not only take biopsies from a certain region where you’re not sure; you can basically do what is called a data roundtrip. You can mark several borders around your tumor and, after pathology has rated them, adapt your radiation plan. And especially in reconstructive surgery in oncology, I think we have the opportunity to preserve our precious bone for reconstruction. Because usually, the radiotherapists focused the main radiation onto the reconstruction. And thereby we have a clear situation where we can say, «This has been the cavity of the tumor. We resected that.» We could even do an entrope [SP] to get a hollow situation, a hollow of resection, to avoid as such the radiation of the whole reconstruction. Or we can have a localized boost.
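As a rough illustration of that data roundtrip idea, the sketch below models tagged margin samples whose pathology ratings drive a localized boost decision. The data structures and values are invented for the example and do not reflect the actual planning software.

```python
from dataclasses import dataclass

@dataclass
class MarginSample:
    """A tagged spot on the resection border (illustrative structure only)."""
    tag_id: int
    position_mm: tuple          # coordinates in the planning dataset
    pathology: str = "pending"  # later set to "R0" (clear) or "R1" (tumor cells present)

def boost_targets(samples: list) -> list:
    """Return the tagged positions the radiotherapist could boost (R1 spots)."""
    return [s.position_mm for s in samples if s.pathology == "R1"]

# Three spots tagged intraoperatively, then rated by pathology after the roundtrip.
spots = [
    MarginSample(1, (12.0, -4.5, 33.0), pathology="R0"),
    MarginSample(2, (15.5, -2.0, 31.0), pathology="R1"),  # a positive margin sample
    MarginSample(3, (18.0, 1.5, 30.0), pathology="R0"),
]

# The radiation plan adapts: only the R1 region gets a localized boost.
print(boost_targets(spots))  # [(15.5, -2.0, 31.0)]
```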

How does that look? In theater, as said, we used a transoral facial degloving approach here. And we were able to resect the tumor without too big of an aesthetic deficit for the patient. And then we used a DCIA flap to give the patient a bony reconstruction for the dental implantation as well as the facial prominence. The very nice thing about navigation in this case is that we don’t have to use [inaudible 00:34:11] to see if our reconstruction has worked. Because we can segment all that, we can segment the tumor, we can segment the bony structures, and just go over it and see if our reconstruction is in the place we want it to be. So it also reduces radiation.

And if we look at the outcomes, we have an aesthetic and functional outcome which is quite sufficient. You can see the maxilla is reconstructed by part of the DCIA flap. We have the facial prominence reconstructed. And that’s a young patient, four weeks after surgery, with quite a good status, now getting implants at the moment.

So if you wanna sum up in one word what virtual planning and navigation bring us: predictability. That is very good in delicate surgery, because we don’t make any mistakes on the patient. We can go into virtual planning, try hard to reconstruct it, play around a bit, and make all the mistakes we don’t wanna make on a patient in that virtual planning environment. As you see, in that case, we had a planning, and we were able to conduct that planning quite well to get a functional [inaudible 00:35:26].

So what do you need as a surrounding? You need planning software, Elements, as I said, or other options. You need visualization in the operation theater, or for the planning session. You need navigation to go into theater and bring that planning in. And you get a step up when you do this: we just bought our new navigation system and several, well, three of those glasses, so you can use the [inaudible 00:35:53] which has been presented here.

And in the first place, we got our glasses and we just wanted to start, and then the first problems came up, because it’s not too easy to implement those things into your system. It’s quite possible, but there’s data protection, and our IT guys are not too happy to bring a device into their system very fast, if you wanna [inaudible 00:36:18]. So it took a while to get those things integrated and to go through the first cases with the Brainlab colleagues. Our boss was at first laughing a bit at us, as we were playing around a bit. But then he stepped in and put one of those glasses on, and I think he thought, «Wow, this technology is quite promising.» It just gives a very immersive feeling that you’re able to stand around one of those plannings and move it, to just see it from different directions.

And so after we were able to implement those glasses into our system, we tried our first case. I think we’ll just let the video roll for a few minutes, to just see how we interact and how that works out. And I’ll be quiet for a second.

Okay, so today, we’re talking about a patient with a tenosynovial giant cell tumor. There’s a special focus here on the location of that tumor, which is in the TMJ, infiltrating the skull base as well as part of the [inaudible 00:37:34] here. We have segmented the relevant [inaudible 00:37:39] up front, to talk with our colleagues about the operative approach. If you come closer here, then you see the [inaudible 00:37:51] internal artery is directly on the posterior medial side of that tumor. The vein as well is very close to that. And the main facial nerve trunk is also here.

So we thought of approaching it by resecting the [inaudible 00:38:14] and putting in a patient-specific implant to get this sorted. The problem, and that’s probably the most complex part for you guys from neurosurgery, is that we have a skull base infiltration very close to the trunk, very close to the internal carotid artery. So there’s a lot of precision work here that needs to be done to get all around here and to get it [inaudible 00:38:44].

We thought of maybe approaching it laterally, which would be the coronal approach here and a facelift approach, incision-wise. Yeah, and I think the hardest thing will be to get the borders free. If you want to have a look at the CT scan here.

So after we had our first session, where we were able to simulate, or to show, where the tumor was, we went on to planning the surgery, and we decided to go for a titanium implant for the skull base. So we constructed that, connected to a patient-specific [inaudible 00:39:38] of the TMJ, which was individualized as well. We went into the OR, and we were even able to get the approach as we had planned it; we were able to [inaudible 00:39:52], with neurosurgery as backup, to bring in that implant and to get the tumor well out, as shown in the visualization.

So if you wanna have a look at what Mixed Reality can deliver for us, I think collaborative planning for complex procedures is the main focus at the moment. And especially in those cases where different approaches from the ENT guys and from the neurosurgeons come together, this [inaudible 00:40:29] especially, we can discuss that in a nice round. We can give a very immersive feeling for the extent of trauma, to see where we are and where we need to get. Orthognathic surgery, I think, will be a game changer as well. Because with orthognathic patients, it’s always hard to see how they will change.

For virtual planning, we have all the data there. It’s all there. We just need to implement that into the Brainlab system or into another system, to maybe give a bigger picture, a nicer picture, of their change. What would it be good for? That’s the question. But this is limited by [inaudible 00:41:10] at the moment. I think superimposing resection plans, or our plans to reconstruct, will be the goal to go for. At the moment, the problem is that the resolution isn’t really good enough. We are not yet able to register from the glasses; that’s technically possible, but without surface recognition, we’re having problems with spatial precision. Object navigation is still possible.

So if we’re talking about patient-specific implants or [inaudible 00:41:44], I think our stepping stones need to be that we improve the precision and the superimposition of information. And thereby we’ll hopefully walk into a future where neurosurgeons have the ability to [inaudible 00:41:59] the information that we planned on, and have the opportunity to deliver better surgery for our patients, to get clearer borders, and to maybe achieve all our goals with better predictability.

So thanks again from Heidelberg. I hope you got a good picture of our use of Mixed Reality. I’m very happy to answer questions, with all the other guys, I guess.

Jenna: Heartfelt thanks, Professor Strong, Professor Lethaus, Dr. Bartella, and also Dr. Kühle. It’s just amazing for us to see to what extent you are already using Mixed Reality in your daily business in craniomaxillofacial surgery. We have already received a lot of questions, and I will address them in a second. But as a reminder to all participants: you can still send questions. If you have any question, just type it into the chat function, and we will make sure to address it to the speakers within the webinar. So Dr. Kühle, Dr. Bartella, Professor Strong, Professor Lethaus, we’ve got a few questions. And I think the first one goes specifically to the talk from Dr. Bartella. In doing the assessment of Mixed Reality, did you control for the novelty effect? MR is new and fancy, of course. Did that have an impact on the participants?

Dr. Bartella: Thank you for this very interesting question, which is, yeah, technically addressing the core of our data question. Well, actually, in this particular case, we planned it already preoperatively, together with the radiotherapists who joined the session. And we also discussed where we have to be very careful with the structures within. And obviously after the surgery, we met again with the colleagues, and there was a long discussion about whether or not we radiate the patient and whether we need the boost. And it was really a gain for the interdisciplinary approach. I hope this addresses your question.

Jenna: I think so. Thank you for that answer. There’s another question that goes more to the topic of Dr. Kühle. What’s the main drawback of current Mixed Reality that you would like R&D to solve in the future?

Dr. Kühle: Well, thank you for that question. As for wishes we would like to have: if we’re looking into using it in an operation theater, I think it starts with the registration and the superimposition, and the quality of that. So that would be my greatest wish, to have my information in a solution that allows me to be sure that I’m at the position I want to be. At the moment, we’re talking about five to six millimeters in superimposition accuracy, so we’re looking into developing [inaudible 00:45:08] devices. I think that is one thing that can help us overcome the insufficiencies in the operation theater.

In explanatory situations or in student education, I think a bit of an opening towards the integration of other planning software, or at least their data, would be nice. As I said, orthognathic surgery is a topic we are looking a bit into, and I would be very happy to use several forms of data to display and to interact a bit more with them. For instance, a morphing or a movement of the segments to show that. So those are basically the two things I would really like to see in the future.

Jenna: Thank you for that input. I think that goes perfectly in line with our Brainlab strategy. And I’m sure we will have more conversations with you, but also of course with Heidelberg and also with Professor Strong. There’s one more question; feel free to answer, whoever of you would like. So the question is: how much time was spent on the preparation for preoperative planning? Do you feel this is practical for the majority of practices?

Professor Strong: I mean I can kick in. I think that workflows are improving all the time. So this is a relatively new technology, and any time you have a new technology, it will take more time to get things set up. But as access gets greater, as workflows get easier, I think the potential is there for patient education. I mean that’s just a slam dunk: if we can show the patients a better representation of what we’re planning, they’ll be more comfortable and understand it better. Education-wise, certainly. So I think it is just a matter of streamlining the workflows, and at that point, it will become much more accessible for many different types of practices.

Dr. Kühle: And if I may just add one thing. Workflows can be optimized. For instance, we’re always doing navigation for complex skull base or mid-face oncology patients. And if you’ve got your planning software attached to your PACS system, then the import of the data is not a thing. It’s just there. You just mark the pathology. And if you’ve got a server that is connected to your navigation system, it’s just there on the navigation system. And if you want to log in, you just put on your Magic Leap glasses. So I think the interaction is reduced by the centralization of information transfer. And if you’re able to have a fast workflow, the Mixed Reality glasses aren’t extra work. You just display it in another way.

So the preparation, the segmentation, is a matter of maybe a quarter of an hour or half an hour, if you put a bit of [inaudible 00:47:58] into it. But it’s not a long procedure.

Dr. Bartella: [crosstalk 00:48:04].

Jenna: Sorry.

Dr. Bartella: Sorry. Just to add a very last thing. In the past, we had iPlan. And actually, no offense, it took some time to mark the tumor and all those anatomical structures. But in Elements, the auto segmentation runs really well. So it reduced our preparation time, I think, from one and a half hours to maybe 10 minutes. So this was a big step forward, in my opinion.

Jenna: Happy to hear that. Maybe for those participants that are not familiar with our planning software: iPlan was our previous planning software. We have now already partially switched to Elements, which is what Dr. Bartella was just mentioning; we changed the segmentation and improved the algorithm, so the preoperative planning goes much quicker. Dr. Kühle, there was also one question: the segmentation that you showed in the video, can you give a rough estimation of how long it took you to prepare that case, to have it then shown in Mixed Reality?

Dr. Kühle: I need to guess a bit, but the tumor, as it’s shown in imaging with the contrast agent, is quite easy to do. Structures like the facial nerve trunk need to be done a bit manually, so that may take up to maybe 10 minutes for the facial trunk. With a good CT, there are automated segmentations, for instance for the arteries. I guess the whole segmentation may have taken maybe half an hour, maybe three quarters of an hour. But that’s mainly due to the segmenting of the small things like the facial trunk; as said, the tumor is usually done fast. If you’ve got contrast agent in the arteries, that is done very fast as well. And the useful feature of Elements, the automated segmentation of the anatomic structures, just gives you those with a click. So it’s just the structures we need to see that are not in the general [inaudible 00:50:12] you need to do, I guess maybe some [inaudible 00:50:15].

Jenna: Thanks a lot. Professor Strong, there’s a question for you. Any insights on how the Mixed Reality displays compare to regular stereoscopic 3D screens?

Professor Strong: The thing that jumps out at you most, I think, when you wear the glasses is your ability to walk around and walk into the objects. So with you and the other colleagues, as you saw in the video, it looks sort of funny when you don’t have the object in front of you. But when you’re in the environment, you can walk into it and literally slice in and look at it from different angles. So I think it’s much more realistic. You can move and interact in the environment. So there’s much more utility than just projecting on stereoscopic screens. It’s a full jump up from what we think about in the past with stereoscopic screens.

Jenna: Thank you. Another question in that direction. So you’re all working in university hospitals. Do you see this technology changing medical students’ lives?

Professor Lethaus: Absolutely, absolutely. This will be a game changer. Because you can teach things like in real life. As Brad said, you can walk through the patient. You can take different angles. And you can give the students a totally different vision of what you’re trying to teach them. And it’s much easier if you use that tool. So I think… Imagine you have like 30 pupils there, everybody in one room, and everybody has these goggles on. I mean you have to be careful that they don’t run into each other, because everybody is so excited about the image. But it would be great, yeah. We’d have to hire gym halls to do it.

Dr. Kühle: Yeah, [inaudible 00:52:29] I think. In medical education, practical education is sometimes very inadequate. So some students see a lot of one kind of procedure, like orthognathic surgery, and some see a lot of oncology. But some of them are just standing there holding two hooks and don’t see anything of the surgery. I think the immersion we get with the Mixed Reality glasses maybe gives them the possibility to have a more level, more integrated experience, and so we can deliver some experiences we cannot in real life. Especially in corona times, maybe you can do online teaching, showing them things that happened in the OR. So I think it’s a very promising technology as well.

Jenna: Thank you.

Professor Strong: Yeah, I mean I would agree. Like that video of the 2D CT that I showed: when we represent these things in three dimensions… Everyone has a different ability to sort of put together two-dimensional data in their mind, and, you know, with practice it gets better. But from an educational standpoint, to have it compiled and in front of you in three dimensions is such a powerful tool. And I think it just accelerates the learning curve very rapidly.

Professor Lethaus: We had a good discussion before this meeting, all four of us, about how difficult it actually is to show you the power of this tool. You have to have it on your own head, and you have to see through it. Because it’s like opening a different door. We’re already really excited about it, and I can imagine that some of you will say, «Well, what are they talking about? I mean, nice pictures.» But really, we are giving you this small view through a keyhole into a totally different world. And it’s difficult to show you what this can do. You seriously have to have it on your own head to see what we mean.

Professor Strong: Agree.

Jenna: Thank you for that. If any participant is now thinking that they would like to try Mixed Reality, feel free to either write to us in the chat or send an email to [email protected]. We will make sure that you can also try out the Mixed Reality glasses. They are easy to transport, so we can travel with them, or they can be sent to you so that you can try them out. There are some more questions, and I think the next one is a more critical question. What are the main obstacles to using Mixed Reality, from your perspective?

Dr. Kühle: Maybe, as we encountered some obstacles integrating that technology: I think if you really want to have them in your system, with your navigation system, there are a lot of protocols that need to be okay with data protection, with the IT guys. You need to have a surrounding to have [inaudible 00:55:21]. You need to have a bit of space around them. So it’s not a thing you just have in your pocket, put on, and use while sitting at your desk. You need a bit of a surrounding to use them to the full extent. And I think that is something you have to have, because if not, you cannot use it as intended. It’s basically a bit like the Holodeck in «Star Trek»: you can walk around, you can see things, and that brings the full immersion. Just sitting around with the glasses on your head will probably not deliver that.

Professor Strong: I mean like any other physical object, it needs to be where you want it to be. And you need to have enough of them if you’re gonna be in a large room with multiple people. And I think right now, cost is certainly a factor. I mean you can’t deny that. And so as things get streamlined, cost is gonna come down. It’s gonna be easier for us to have multiple of these units. So that certainly is a factor, and I only see that getting better as we move forward.

Jenna: Thank you. We received one question; I’m not sure if it is actually a question to me or to one of you. The question is: how did Brainlab make the rendering so hyperrealistic? And I think that also goes in the direction of what Dr. Bartella said about the previous software version and the new software version. For that, I think it’s really important to know that we have an automatic segmentation tool. So within a few seconds, you can put in your dataset from any patient, and then you get a patient-specific segmentation for every skull bone of this patient. And this happens within seconds.
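For readers curious what automatic bone segmentation can look like in its simplest form, here is a toy sketch that thresholds CT intensities in Hounsfield units with NumPy. This is a generic textbook illustration, not Brainlab’s actual algorithm, and the threshold value is a common rule-of-thumb assumption.

```python
import numpy as np

def segment_bone(ct_volume_hu: np.ndarray, threshold_hu: float = 300.0) -> np.ndarray:
    """Toy bone segmentation: mark every voxel above a Hounsfield-unit threshold.

    Clinical tools combine thresholding with atlas- or learning-based methods to
    separate individual bones; this sketch shows only the simplest ingredient.
    """
    return ct_volume_hu > threshold_hu  # boolean mask, True where the voxel is "bone"

# A fake three-voxel "CT" just to run the function: air (-1000), soft tissue (40), bone (700).
fake_ct = np.array([[[-1000.0, 40.0, 700.0]]])
print(segment_bone(fake_ct))  # [[[False False  True]]]
```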

And I think Dr. Bartella also mentioned that. So this goes really quick and fast. And since we have a new collaboration with the company called Level Ex, they are now part of the Brainlab family, this is even more important. Because this also changed the way we can visualize, for example, tumors, for example, bony structures. And Dr. Bartella, I think maybe you noticed the difference between the two software versions most. Maybe you can tell the participants something about that.

Dr. Bartella: Well, we started a couple of years ago with Brainlab, and there was this online thing called iPlan, where you had to log in. And it took quite a long time, especially when you began to mark, because you had to scroll through all the slices and adjust the margins of the tumor, of the maxilla, of the mandible, whatever you wanted to display. And now there’s this new program called Elements. And you have suggestions that pop up, like mandible, maxilla left, orbit left. And you just press those buttons. And sometimes you have to adjust a little bit, but that’s more due to the different densities and high- and low-vascularized areas; there might be some minor changes necessary. But in total, it literally takes minutes. And this is a big advantage, I think.

And yeah, I think this also allows this precise display of the structures, because it automatically runs along the margins. And that’s why you can see the vessels on the tumor so sharply.

Jenna: Perfect. Yeah, I think what you also noticed is that the new Level Ex technology we have now really improves how bone and tumors look. So in the past, the tumor was like a colored bubble, whereas now it really shows the structure of the tumor, instead of just that ball. And I think this is a real improvement that you can then of course also see in Mixed Reality. I think we received a few questions on a very similar topic. And the question here is: would you use the technology for patient education? If you would use it for patient education, how would you integrate it into your workflow? And do you think patients could benefit from it and better understand their diagnosis and their treatment? And I guess this is a question to all of you.

Professor Strong: This is a slam dunk, and it’s yes. I think as a starting point, you could have one set of goggles and have the patients you wanted to work with go into that room. Ideally, you’d have one in all the rooms. But from a patient education standpoint, medical, legal, the whole just understanding of the process: if patients are more comfortable with you and the way you’re describing things, you know, surgical success is related to expectations in a lot of cases. If the patient understands what success is and what the plan is, this is gonna be a very powerful tool. I’m convinced that this will be one of the earlier applications, at least in the U.S., that we will be using this for.

Dr. Kühle: Yeah, for instance, if you look into the success of operations, it’s mostly survival, how precise the surgery has been. But more and more often, we see patient-reported outcomes as a measure of how successful you are, or how the patient will see it. And for the patient, there’s always a problem to understand: what’s really happening? What’s the problem? If you’re talking about a mid-face tumor, that’s not visible for the patient. It’s hard to grasp. The same thing for orthognathic surgery. And I think if we want to improve patient-reported outcomes, and not just calculate how precise our reconstructions have been without [inaudible 01:01:23], we can open the black box to them. Maybe give them an understanding of themselves and their anatomy and their problems. And I think it will improve, because the patient will better know what awaits him and where the problems are. So a clear yes from me, and I’m very happy with the slam dunk definition, yeah.

Dr. Bartella: Actually, I see it a little bit more critically. I believe in it for orthognathic surgery, as you mentioned before. I think it’s brilliant. And we are also planning to do it with cleft patients, to try to describe better to the parents what we are planning to do and what the problem is. But especially with these mid-facial tumors, when they are close to the orbit, we are not entirely sure how much to show the patients. Although with smaller tumors, obviously, it’s really good to tell them we have to remove it. But as soon as it goes to the skull base or to the orbit… In our case, we didn’t present it to the patient. But the potential is there, especially for the non-oncological cases, in my opinion.

Professor Lethaus: On the other hand, I mean, it’s got quite some potential for the informed consent, because this is also sometimes an issue. I mean, you can clearly show the patient what is at stake, what they can expect, what the dangerous areas are that we are close to or approaching. And this gives them a better understanding of the operation. Maybe we have to be willing to have the patient say then, «Okay, then I don’t do it. Because this is really dangerous. I understand that it’s dangerous.» But we have to look. At least it’s an honest informed consent, so maybe we have to be open to see what will happen. We lift the education of the patient to a much higher level. Sometimes it’s really difficult to tell them what we mean. Because in our minds, it’s clear. We do it every day. It’s our daily business. But in their imagination, it’s really difficult. So everything which helps to get the patient on the same eye level is, I think, a good thing and a good tool.

Dr. Kühle: That’s a very good point. Because the language we specialists use to talk about things is something not every patient can relate to. And images do not lie. Images give you a real picture. I guess you’re very right on that point.

Professor Strong: I think it levels the playing field, really. So yeah.

Jenna: Okay. Thanks a lot. I think patient education is a very interesting topic. And I’m happy to talk with you maybe six months from now to see what your first results with patient education have been and how you think Mixed Reality can be used for that. And I think we received the very last question that I would still like to phrase. The question is: when can we use Mixed Reality for image-guided surgery? And that is of course a question to Brainlab, when we will offer that on the market. And for sure, we will have another webinar at the time this is disclosed. But maybe it would be interesting to know your opinion, whether you see that this will be beneficial for you as a surgeon.

Professor Lethaus: Absolutely. That’s a real slam dunk. Absolutely. I mean, imagine this: a nice picture in the eyeglasses, like the landing guidance we know from the airplane. I mean, you have this guidance: go a little bit to the left, and then be careful. We know from aviation that this will help us. I mean, it is already proven that if you push the limit a little bit more towards augmented reality, this will make us better surgeons. I’m absolutely convinced of that, yeah.

Professor Strong: Reinald, do you wanna comment on your registration? And certainly those hurdles?

Dr. Kühle: Yeah, I kind of was [inaudible 01:05:24]. I think, as navigation probably had it in the beginning, precision is a thing. I think we need workarounds to make the registration better. We see a lot of deep learning and neural networks being able to process pictures in a very fast way. And I think we probably need a lot of that calculation power, talking of AI, going into recognition of what we actually see at the moment. Because using those three little bubbles there to look at a patient just gives us the bone. But it doesn’t give us a surface. If we just, for instance, change the operation [inaudible 01:06:04], we don’t have registration anymore on that [inaudible 01:06:07]. So I guess we either need a lot of AI or visualization power to get that sorted, or we need a workaround with projection devices and stuff like that. So I guess we’re looking at at least one or two years of development. We started a project now with the University of Munich to develop those devices, but it’s still at a very early stage. So I hope we’ll maybe see that in one or two years, at least for several applications.
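For context on what registration means here, below is a generic paired-point rigid registration sketch (the Kabsch algorithm) in Python: it aligns fiducial positions from the image dataset with their measured positions on the patient and reports the per-marker residual. This is a standard textbook method shown for illustration, not the navigation system’s actual implementation.

```python
import numpy as np

def rigid_register(moving: np.ndarray, fixed: np.ndarray):
    """Paired-point rigid registration (Kabsch): find R, t with R @ moving + t ~ fixed."""
    cm, cf = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

# Three fiducials ("little bubbles") in image space, in millimeters.
image_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 40.0, 0.0]])

# Simulate their measured positions on the patient: a 5-degree rotation plus a shift.
theta = np.deg2rad(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
patient_pts = image_pts @ Rz.T + np.array([2.0, -1.0, 3.0])

R, t = rigid_register(image_pts, patient_pts)
residuals = np.linalg.norm(image_pts @ R.T + t - patient_pts, axis=1)
print(residuals.round(6))  # per-marker registration error; ~0 for exact, noise-free data
```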

Professor Strong: And I think the potential exists for being able to visualize things without incisions or with very small incisions. Because you can determine where things are, you know, seeing through the soft tissue. So the potential is huge. I do think it’s the next step past education and presurgical planning, these things that are low-hanging fruit. And then that’s sort of, I think, our next step: to be able to move into the operating room and actually accurately register our data and perhaps minimize our incision size. And work through some very small incisions to access fractures, tumors, whatever, that we can just visualize through the skin without making larger incisions, potentially.

Jenna: Thank you so much for that input. I think our R&D team will also watch this session, and as soon as there is anything we can disclose, we will have another webinar on that. Again, thank you so much for this impactful webinar and thoughtful discussion. We have now finished the question and answer session; there are no questions left. If any question still comes to your mind, I’m happy if you send us an email to [email protected]. We hope you all enjoyed the webinar. And we’d like to remind you that we will soon have our next webinar, on the 29th of September, with Dr. Zimmerer from Leipzig. He just recently joined that team, and he will talk about TMJ navigation.

If you are curious about our webinars and news from Brainlab, please follow us on our social media channels, or drop us an email at [email protected]. Professor Strong, Professor Lethaus, Dr. Kühle, Dr. Bartella, thanks a lot again for all your preparation, for putting this together. I think it was a wonderful webinar and I’m looking forward to the next one. Everyone, stay safe and healthy. Thank you and goodbye.

Group: Bye.

Speakers:

Bradley Strong, MD

Department of Otolaryngology/Head & Neck Surgery, University of California, Davis School of Medicine Sacramento, CA, USA

Bernd Lethaus, MD, DDS

Department of Oral and Maxillofacial & Plastic Facial Surgery University Hospital Leipzig, Germany

Alexander Bartella, MD, DMD

Department of Oral and Maxillofacial & Plastic Facial Surgery University Hospital Leipzig, Germany

Reinald Kühle, MD, DMD

Department of Oral and Maxillofacial & Plastic Facial Surgery University Hospital Heidelberg, Germany



