Webinar

Talking Innovation – Today’s technology and what’s coming in the future

Topics
Digital Operating Room
Digitalization
Operating Room Integration
Language

English

Share this page

Description

Technology continues to rapidly evolve. With this, we ask: How is today’s technology impacting healthcare? And how do we see innovation shaping its future? Join our speakers as they share not only their answers to these questions, but their expectations for future innovations.

Speakers

Simon Enzinger

MD, DMD, CMF Surgeon, Landeskrankenhaus Salzburg

Woosik Chung

MD, Spine Surgeon, Presbyterian St. Luke’s

Jennifer Esposito

Vice President, Health, Magic Leap

Auke Meppelink

Director Digital Health Technologies, Brainlab

Moderator

Matthias Eimer

Brainlab

Video Transcript

Matthias: Hi. Thank you, everyone, for tuning in again, and welcome back to the third session of our digital symposium, the online version this year. This last part is also really exciting: a deep dive into certain key digital technologies that will shape the future of digital healthcare.

For that, I'm happy to have on board with me two industry experts and also two users who are really pushing the boundaries of digital technology in the operating room. The first expert we have on board with us is Auke Meppelink. He's the Director for Digital Health Technologies here at Brainlab. In that role, he leads the global marketing organization, which includes cloud technologies, the ENT and CMF product portfolios and general surgery, overseeing a really wide set of different product areas.

Auke has been with Brainlab for almost two decades, and he has always been a true visionary. I'm not only saying that because he is my boss; he has achieved something really great, namely selecting key technologies and pushing them through to final implementation. He was, for example, the first to drive projects based on a complete video-over-IP infrastructure, back in 2012, in the Netherlands. And I remember very well the final pictures of the royal family from the inauguration. That was really a huge success back then.

So, Auke, thank you very much for being with us. And greetings to Amsterdam.

Auke: Thanks, Matthias, for this nice introduction, and all those nice words. And I guess that on Monday we should talk because there’s certainly something that you want from me.

So, I would like to talk to you about something that I call information-guided surgery. I'm really excited to talk here. There were a lot of topics I wanted to address, but of course I have to limit myself to a few, so I selected three main topics: data access, device communication, and visualization and interaction. I think these are three key topics that keep coming back in many projects that each of you is probably running into in your hospitals. They are all highly topical, and they also fit really well together.

So let's start with data access. As the data on patients in hospitals keeps increasing, there's also an increased need to structure that data. We're seeing today that PACS systems with the patient images are getting fuller and fuller. Especially if a patient has come back several times, there is a great deal of data available for that patient, and it becomes increasingly difficult for the medical care professional to select the data that matters most for the procedure he or she wants to plan for that patient.

It's the same with the hospital information systems: there is more data available, and every time the patient comes back to the hospital, something new is documented again. So what we at Brainlab focus on is making sure that out of those piles of data, we create well-organized libraries, and that we present that data in a sensible format. We also make sure that in the background, we really analyze that data with smart AI algorithms. We can, for instance, automatically segment certain structures of the body and present those, again, in a sensible format to the surgeon, helping them make the right decisions for the treatment.
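To make that background step concrete, here is a minimal sketch of automatic structure segmentation using the open-source SimpleITK library, with a plain intensity threshold standing in for the learned models Auke describes; the file names and Hounsfield thresholds are illustrative assumptions, not Brainlab's actual pipeline.

```python
import SimpleITK as sitk

# Load a CT volume (hypothetical file; any SimpleITK-readable format works).
image = sitk.ReadImage("ct_head.nii.gz")

# Threshold-based bone segmentation as a stand-in for a learned model:
# voxels between ~300 and 3000 HU are marked as bone.
bone = sitk.BinaryThreshold(image, lowerThreshold=300, upperThreshold=3000,
                            insideValue=1, outsideValue=0)

# Light morphological cleanup so the mask presents sensibly to the surgeon.
bone = sitk.BinaryMorphologicalClosing(bone, [2, 2, 2])

sitk.WriteImage(bone, "bone_mask.nii.gz")
```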

But a key element is also being able to share that information with other platforms and other technologies, whether that's a navigation system, a large display in the OR where the information is shared with the surgical team, or even the cloud, where it can be shared with colleagues for a consult.

So basically, there's a whole lot happening underneath the surface to structure data, prepare it and send it onward through the organization. I always like to make the comparison to an iceberg, where for the users only the tip of the iceberg should be visible, and that's all they should have to worry about. We as medical device manufacturers need to make sure that that experience is the best possible.

But underneath the surface, there's actually a whole lot happening. For that reason, we're always very happy to have such good collaborations with many IT departments of different hospitals around the globe. So, I want to touch briefly on the part of the iceberg underneath the surface. As an example, I just wanted to show you how data actually flows under that surface, and how we prepare information for a surgery.

So as an example, here I have an overview where you're seeing workflow information, radiological information and video information flowing from pre-op to post-op, and clearly intra-op, throughout the hospital. I think this is a typical setup, where a hospital has an EMR or HIS system, a PACS, a vendor neutral archive, and nowadays clearly also cloud services. In a typical workflow, a physician would review patient data before the surgery, whether that is a scan or video data. And intraoperatively, he or she would get the information presented in the best possible way, close to the surgical field.

And clearly, postoperatively there is again a step where whatever was created during the surgery can be reviewed, and maybe shared or stored. For that purpose, we've developed a solution that we call the patient data gateway. It basically receives the workflow messages from the hospital information system, meaning which patient is going to be operated on in which OR. We can then offer that information to the PACS, and the PACS gives us back all the data that is available for those respective patients. We process that data, we analyze it and we enrich it, and we then make it available again on a workstation in the office.

So if a surgeon knows that tomorrow he's going to operate on a certain patient, then overnight we download the data and make it available, so that the surgeon can review the scans from any PC in the hospital network. Structures are automatically segmented before going into surgery, really making the best possible surgical preparation possible.

And for the OR, we automatically pre-load the data that is already there for the patient, and clearly all the worklists. So OR 1 only gets the patients being operated on there that day, and we do the same for OR 2, etc. There's no manual interaction; nobody needs to manually enter which patients are going to be operated on in which ORs. We aim for maximum safety, and therefore have a streamlined process where no manual interaction is needed. The only thing left is a validation step, double-checking that it is indeed the right patient on the table.
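As a rough illustration of that overnight pre-fetch loop, here is a minimal sketch against a hypothetical REST gateway; the base URL, endpoint names and JSON fields are invented for illustration and are not the actual patient data gateway interface.

```python
import requests

GATEWAY = "https://gateway.hospital.local"  # hypothetical base URL

def prefetch_for_room(room: str) -> None:
    # 1. Ask the HIS-fed worklist which patients are scheduled in this OR today.
    worklist = requests.get(f"{GATEWAY}/worklist",
                            params={"room": room, "date": "today"}).json()
    for entry in worklist:
        # 2. Ask the PACS which studies exist for each scheduled patient.
        studies = requests.get(f"{GATEWAY}/studies",
                               params={"patient_id": entry["patient_id"]}).json()
        for study in studies:
            # 3. Stage the images locally so the data is "just there"
            #    when the surgical team enters the room.
            requests.post(f"{GATEWAY}/prefetch",
                          json={"study_uid": study["uid"], "target": room})

# Run once overnight for every OR on the schedule.
for room in ("OR-1", "OR-2"):
    prefetch_for_room(room)
```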

Clearly, the pre-fetching of data happens in the background, so when the surgical team enters the OR, the data is automatically just there. Nobody needs to think about how to get it; it's just like watching TV, you turn it on and you have it. Then during a surgery, the surgeon creates screenshots or videos, and clearly those need to go somewhere as well. They go back to the intermediate storage, and afterwards find their way to a workstation, where the surgeon can review and edit them and cut smaller video clips out of the recordings we made.

And of course, this information can then go back into the vendor neutral archive or PACS system, whatever the hospital has, and find its way to long-term storage. So this shows how we today are already able to manage very complicated data flows, ensuring that the surgical team just gets the data at hand and doesn't even need to worry about it. But when we look further and think about tomorrow, and some of you might have heard some of this already in our CEO's talk this morning, we're making a next step: we're not only thinking about anatomical data, or data that is available on the patient in the hospital information system, but also about spatial data and video data.

And when I talk about video data, I don't only mean the endoscope, but also an overview camera or any other video source that is there. Workflow data is clearly important, and so is statistical data, so that you can really follow up on the efficacy of a procedure, make sure you improve for the future, and compare yourself to your peer group.

When we think about all that data, there's a larger complexity, and the information gets interconnected. I think this interconnection of data is well represented by this slide, where you see how we go from anatomical data, automatically segmenting structures out of a single patient scan, to the spatial domain, where we track instruments in the OR relative to those patient scans.

But the spatial domain also means the room itself: we use that room and document, for instance, whatever happens in it throughout the procedure, and then again match that with other steps in the data collection. That includes a video domain, where we can record videos but also run machine learning algorithms in the background to detect certain anatomical structures. And when we move towards workflow, the system even understands, when we bring in a suction for instance, which part of the procedure we are in. It can set milestones, and it can document that.

So we are really moving towards an era where we are starting to connect the dots, and the data collection and data processing go far beyond what we have today. To enable that, it's also extremely important that there is good device communication. That brings me to the second topic.

By device communication, I mean having a central access point to all devices, and making sure that from one device you can connect to the others. If we think of our private environment, we all have such a central access point: our information hub at home is typically our phone. By connecting to the Wi-Fi or the 4G network, we're able to do a whole wealth of things, from online banking to news to watching Netflix to audio. Basically anything we're interested in, we can control from a single device.

And that is exactly what we do for our digitalization today, and what we want to do even better in the future. To illustrate with my home again: I deal on a daily basis with four different passwords, with different hardware devices, different operating systems and different hardware interfaces.

At home I'm used to that, and with the digitalization at home we're already seeing that at least the hardware side of things is getting a bit simpler. But when we look at the OR environment, we're seeing that there's still a way to go: we have tons of different displays, different resolutions, different sizes. We're seeing here in the image also different cabling and different interfaces on those.

To give another example, we have different video sources, whether it's an endoscope or a microscope, and they all come with their different resolutions and connectors. And nowadays, of course, we're also seeing different imaging modalities for X-ray imaging, whether it's a hybrid setting like here, or a robotic imaging device like the LOOP-X from Brainlab.

But they all come with their own user interfaces and connections. And of course, if we want to achieve a real flow of data from one to another, there are a lot of steps to take. Not only because we want good interconnectivity: if you look at the literature on the different causes of technical failure in an OR environment, you see that actually 80% of it is avoidable. I assume that for many of you, that's probably a very well-known number that you see in daily practice. And much of it actually comes from handling, manipulation, maintenance, integration and networking of systems. Meaning: connectivity, and making sure that things are easy to use.

For that reason, we believe it's extremely important that not only we as Brainlab, but the whole industry, move towards open standards. Hospitals are moving more and more towards a sort of IoT, an Internet of Things solution, meaning that devices from different vendors need to be able to communicate with each other if we want to really leverage the data.

For that reason, our CEO also mentioned this morning that HL7 FHIR, for instance, is one of the key standards we really believe in. In these increasingly complex environments, with more robots and more imaging devices coming into the ORs, we just need to make sure that, not too long from now, we can still manage all of that and the devices can still communicate with each other. Because then we can also really leverage the information.
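For readers unfamiliar with it, HL7 FHIR exposes clinical resources over a plain REST API. A minimal sketch of a patient search is below, pointed at the public HAPI FHIR test server rather than a hospital endpoint; the search values are arbitrary examples.

```python
import requests

BASE = "https://hapi.fhir.org/baseR4"  # public FHIR R4 test server

# Search for up to five patients with a given family name.
resp = requests.get(f"{BASE}/Patient", params={"family": "Smith", "_count": 5})
resp.raise_for_status()

# FHIR search results come back as a Bundle resource.
bundle = resp.json()
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    name = patient.get("name", [{}])[0]
    print(patient["id"], name.get("family"), name.get("given"))
```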

Ultimately, we therefore want to create an environment like this. We see robotics, we see navigation, we see a Zeiss microscope, a lamp with a camera in it. In the end, we want the different technologies to work seamlessly with each other, making sure we document everything correctly, but also that we trigger events at the right moment. There's a lot of opportunity there, so we have to make sure there are no more silos. Again, that's what we at Brainlab spend a lot of our time on, and we are strong advocates for it.

Which brings me to the third and final topic: visualization and interaction. We've seen how quickly the OR environment has evolved over time, from film to full touchscreens and high resolution displays. Nowadays we're even seeing 3D screens, and here is an example of a DICOM viewer. But we're also making the next steps already, bringing mixed reality to the OR. And I'm extremely happy that Jennifer Esposito from Magic Leap is one of the speakers in this round, because it's absolutely incredible what they're doing at that company. We're very pleased to be partnering with them and to bring mixed reality technology to the OR, where you can basically create virtual displays, and your wall is your display.

So you can make the information as big as you want, and you can really immerse yourself in the data. It also offers many opportunities for teaching and education, where you can really look through the body and explain to residents, for instance, the different steps that are coming up. For educational purposes, this brings a lot. It also helps with planning the best possible surgical approach, for instance for a craniotomy in this case, which could really benefit from the mixed reality approach. You see through the body, you see what is behind, which structures are there that you have to circumvent. And clearly the information is projected in the room, with sensors automatically detecting certain instruments. It's incredible the steps forward we are making there. As I said, Jennifer will touch further on that.

The last topic I want to touch on, and it's also really important with respect to visualization and interaction, is of course what we do with video material, for instance coming from endoscopes or microscopes. Here too we've come from the 1950s, when we had the very first flexible endoscopes with fiber optics, through the '90s, when we started working from the display and refining the technology further, to today, where we're talking about 4K, chip-on-tip and other kinds of technologies.

We are ready now to make the next step forward. That is very much about automatically understanding where we are in the procedure, but also detecting landmarks and bookmarking them. When we're done with a certain phase of the procedure and we bring in an instrument, the system automatically documents it, and maybe even triggers other devices to respond, producing not only very good documentation but, most importantly, a very safe and very efficient procedure.

So yeah, thank you very much, Matthias, for giving me this opportunity to talk about the way towards information-guided surgery. And this has brought me to the end of my presentation.

Matthias: Thanks for that presentation, really interesting. There were a couple of questions coming in, and so many people are trying to strike up a conversation with you. I would like to pick one of them, because I find it really interesting. The others can surely be asked later directly in the chat, I guess.

But that one question from Sam is really nice. So Sam is asking, “I really liked your iceberg comparison. So I’m a resident surgeon, but sometimes when talking to our medical IT department, I feel like being the captain of the Titanic. So do you have any recommendation based on your experience to improve the collaboration?”

Auke: Yeah, I think this is probably a situation that many of your colleagues run into as well. What we have seen, of course, is that over time a lot of things have changed in hospitals, where IT has taken a more prominent role. IT has also become a consultant to many other departments on what is possible for their procedures, and that's going to continue to change in the future; IT is nowadays already heavily involved in what's happening in OR environments.

So I think what is really important, of course, is very good preparation. In the end, we need to distinguish between the clinical side, that is, what does the surgeon want to achieve, and the goals of the IT department. And we of course need to make sure that there is a good balance between the two.

So what can you do? My recommendation would be, first of all, to make sure that IT is involved early in many of the discussions; if you bring IT in later, that's typically not good, and many good ideas and initiatives happen in IT departments anyhow. So I'm an advocate for early involvement. But I also think it's extremely important that the companies that provide solutions to surgeons make sure they provide the right information to the surgeon for IT as well. From my experience, if the preparation is done well, that saves a lot of hassle afterwards.

Matthias: Okay, thank you very much for your presentation, and also for answering that question.

Auke: A pleasure.

Matthias: I would now like to move on to the second presenter, who, in contrast to our industry expert Auke, brings a user perspective. I'm happy to introduce to you Dr. Simon Enzinger. Simon is a medical doctor from Austria. He is a CMF surgeon, currently working as a managing senior physician at the Landeskrankenhaus Salzburg. And he will talk to us about virtual planning as part of the patient journey. So, Simon, hello to Austria.

Dr. Enzinger: So, hello, and thank you for the opportunity to give this talk. I also want to thank Brainlab for some of my pictures. I want to share just this… Okay, so this should be working right now. So here we go.

Just a few short words about me. I've been working in this clinic in Salzburg since 2007, so quite a while. My specializations there are reconstructive surgery, microsurgery, tumor surgery and traumatology, and I'm also responsible for technical innovations.

We do have… just a second here. Sorry for that, I lost my slide. So here we go again. In our clinic, we have a cooperation with a lot of other departments. For one, there has been a center of microsurgery and reconstruction since 2012, headed by the CMF surgery department, where we work together with all the other departments you can see, like ENT, trauma surgery and neurosurgery. Since 2017 we also have a core facility, combining the CMF, ENT and trauma surgery departments. We share equipment and we exchange ideas.

It's about devices like the cone beam CT, the navigation system, the 3D printers, and most of all about software products. So what do you need for virtual planning? I want to explain the principle a little. First, we have images; we need images, otherwise we can't do any virtual planning. Then we have an image processing part, and this is where we need some help from the industry, as with the third part. The third part is the output: 3D printers, implant production, and of course the intraoperative navigation system and other devices used in the OR.

So, how do we get the patient into the computer? First of all, the images: there is a really huge variety of images, made from CT scans, MRI scans, PET scans or cone beam CTs, but 3D photographs and model scans are used as well. And the challenge is that we have to create a 3D model out of all these images. So we need software, a lot of software, and this software is usually the key point. There are a lot of different interfaces and a lot of different data types we have to deal with.

For example, the DICOM data is produced by the imaging systems, and the STL data is produced by the image processing. Out of that, we get G-code, which transfers our wishes to the CNC machines or even to the printers.

All of this is done in the image processing. So, we take all this data, and here is another picture from Brainlab that I want to thank them for: we take all these pictures and get one thing out of them, the 3D model. The most important thing about this part is that the surgeon or the clinical staff shouldn't have to spend hours and hours producing these kinds of images or models. This is another picture where you can see the different structure types, and this was segmented automatically. That is one part that helps me as a surgeon save time. This then has to be transferred to the hardware: Brainlab navigation systems, 3D printers, or even implant manufacturers.
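To make the DICOM-to-STL step concrete, here is a minimal sketch using pydicom, scikit-image and trimesh; the folder name and threshold are illustrative, and a real pipeline would also apply the DICOM rescale slope/intercept and voxel spacing before meshing.

```python
from pathlib import Path

import numpy as np
import pydicom
import trimesh
from skimage import measure

# Load a CT series (one DICOM file per slice) and stack it into a volume,
# ordering slices by their position along the patient axis.
slices = [pydicom.dcmread(p) for p in sorted(Path("ct_series").glob("*.dcm"))]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# Extract a bone surface with marching cubes (threshold is illustrative;
# assumes pixel values are already approximately in Hounsfield units).
verts, faces, _, _ = measure.marching_cubes(volume, level=300)

# Export the mesh as STL for a 3D printer or planning software.
trimesh.Trimesh(vertices=verts, faces=faces).export("skull.stl")
```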

So now we know what we need. I want to present three cases to you. With these cases, I'll show you, step by step, the increasing importance of virtual planning. The first case was a 19-year-old female patient who had an orbital floor fracture and a medial orbital wall fracture. The orbital floor fracture was surgically treated; the medial wall fracture wasn't.

But six months after surgery, the patient came back with double vision and a three-millimeter enophthalmos. So when she came back, our plan was to make a CT scan, analyze the defect and find out what volume we were dealing with. After that, we wanted to go to the OR and insert rib cartilage in the right position to correct the enophthalmos. Here we see the virtual planning images. We took the right eye and mirrored it to the left side, so we could easily point out the defect, which is marked in red.

We can see the defect even better in this picture, and we can see the volume of the defect. It's not a big volume, so I know for the operation that the piece of rib I have to take out is quite small. And these are the post-operative slides. The left one is before correction, the right one is after. You see that the eyeball is now in the correct position.

The second case was a little more difficult. It was an 18-year-old patient who had no right eye from birth. The whole right midface was too small, and the position of the right orbit was much too low. Our plan was to make a CT scan as the basis for all the planning. After that, we wanted to do the virtual planning and then a model operation.

Then we wanted to take all this information to the OR and perform the operation, using the navigation system to confirm the position of the individual parts. We wanted to perform a box osteotomy and a lid band fixation, plus an augmentation of the zygomatic arch; a secondary lower lid correction was done later on, as well as a straightening of the nasal skeleton. These were the preoperative scans. As you can see, the right eye is much lower; if you want to call it that, it's an eye catcher. If he goes out with friends, everybody will notice that there is something different, something wrong with his face.

So, we did all this planning. We took the left side, which was good in size and position, and mirrored it to the right side. There you can see how much too low the right eye was. And this is the kind of information we need to have before the operation: whether it is even possible to operate on this patient, or whether it would be too much to do and the results wouldn't be good.

So, we took all this information out of the computer and performed the operation on the model, trying to make it as realistic as possible. We could, for instance, decide to take this bone from the upper orbit and use it to augment the lower part, and so on. And this was the post-operative picture. On the right-hand side, you see a young man who is doing fine. There may be something different about him, but it's very difficult to see what it is.

The third one is quite a difficult case. It was a difficult case for us in surgery, because it was a huge meningioma, which is a benign tumor, but the tumor was so big that the patient got real trouble with the eyes, especially the left eye, and she was starting to lose vision. So we decided to remove the meningioma.

In this slide, you see the size of the meningioma. It was really quite huge, and it also extended into the inner part of the skull. In the middle picture you see it from the back, and in the right-hand picture from below. We planned to remove all the meningioma parts and reconstruct with combined titanium implants and a PEEK implant. We put it all into the virtual planning, made the resection virtually, and planned the implants. These implants were then manufactured by a famous company.

The implants, especially the titanium implants, are quite visible, and the PEEK implant in the middle is also visible. The rest, the white parts, aren't implants at all; they're just cutting guides to get everything in place. This information was then transferred to the OR. And this is a picture of our OR. It's quite small, and we sometimes have a little trouble getting all the equipment in, but we can operate there, and that's the most important thing. Here you can see how we control the correct position of the cutting guides. And you can also see that without navigation, we couldn't be sure we were in the position where we wanted to be, and the positioning of the implants would be much harder.

So we simply verified the positions, and after that, the implants just fit perfectly. Here you can see the post-operative CT scan with all the implants in place. The whole operation was a big success. So this is what we have already done.

And what is coming up now? I'm really looking forward to using the LOOP-X, the intraoperative cone beam CT from Brainlab, which is coming up. I was planning to show you the first clinical slides, but due to the pandemic we haven't been able to take live pictures, so even these pictures are from Brainlab. This is a really nice view of the huge gantry. It's also really light, and the scan volumes are really large; that's why we decided to ask for the LOOP-X. We get very, very good images with a fraction of the radiation, because it is a cone beam CT and not a conventional CT.

Despite the really huge gantry, it is quite light in comparison to the other products on the market. We have low dose protocols. And since it is a cone beam CT and not a conventional CT, no radiologist is primarily required in the OR. The referencing with the navigation system is really exact, and much easier than it used to be. And we have immediate control of the results, to avoid a second intervention or, in children, even a second anesthesia.

Robotic surgery is coming up as well; here you see the robotic arm from Brainlab. And there are other surgical devices where you can plan very precisely in virtual reality and overlay your planning onto the display.

And now, the last part coming up is mixed reality. I'm really looking forward to this part, because I want to have my hands free. For those in the audience who don't know the difference between virtual reality and mixed reality: in virtual reality, your eyes are cut off from the environment around you, and you only see a computer-generated image.

Sometimes, if you just have VR goggles on, you can really suffer from a kind of seasickness, because the eye-brain connection to the environment is cut off. In mixed reality, you see your environment, and additional information is displayed in your field of vision.

So these are goggles for virtual reality, and these are goggles for mixed reality. As you see, you can look through them really well. And these are the goggles that Brainlab uses. As in the picture you already saw, you can do a lot of fancy stuff, like having your hands free and seeing the monitors directly over the body of the patient.

And this is what I want: I want to get all the monitors out of the OR and have goggles where I can look at the displays when I want to, and keep my hands free. We can also combine it with microscopes and bring our planning into the view of the microscope; especially in the neurosurgery field, that's a real breakthrough.

For expert and teaching meetings, it's really cool that you can all look together at one skull, one problem, and talk about it. You can teach students or colleagues, or you can ask questions. If you're in the OR and a colleague is better in this field, you can just call them up: "Have a look, see what I see, can you check the planning?" And so on. And this is how the models look, doing the same thing we did in our clinic. But I'd rather leave you with the models, which are much more beautiful.

So we come to the conclusion. Virtual planning is a really good option for surgical preparation before you go to the OR, so you know what you are dealing with later on. Potential complications can be identified in advance. And of course, it makes really good teaching and training material. But it's also important to know that it isn't a beginners' tool: you should know quite a lot about the system and about the anatomy in advance. And it is sometimes quite time consuming, though I hope that will get better in the future. The combination with 3D printing, bringing it back to reality, is really good.

So it's just a part of the journey, but it makes things much easier. That's my conclusion for this talk. Thank you very much for your attention.

Matthias: Dr. Enzinger, thank you very much for that presentation. I think it is really exciting to see those specific patient examples. We've talked so much about technology and how integration should provide more efficient, more optimized workflows for the operating room, but that really puts it into a different perspective, talking about real patients and improving the actual outcome. So thank you very much for that, and also for giving us a glimpse into the Matrix.

Simon, there is a question from the audience that I would like to forward to you. I think it comes from Portugal. They ask: "Simon, if you had to put it into a percentage, how much time do you spend planning with software and modelling, and how much time operating?"

Dr. Enzinger: Actually, it depends on the case. If you take the first case, it is quite fast: it takes me about five minutes to define the defect, and another five minutes to get everything ready for the OR. For the last case, it took me about three to four hours of planning in advance, then about another four to five hours of communication with the manufacturer of the implants, and at least two hours of preparation for the OR.

So, the more difficult the case, the more time you need in advance. But the last case especially wouldn't have been possible without navigation, without the implants, and without the other electronic devices we used. If we didn't have these things, the operation would be much, much longer, much, much harder, and maybe not possible at all. And the lady would have lost the vision in that eye; thanks to this operation, everything was fine afterwards.

Matthias: Yeah, Simon, I'm also an engineer by trade, so when collaborating with our R&D departments, I'm constantly looking into how we can provide faster prototypes to get some hands-on experience. And it seems you have similar challenges in your trade. Can you describe a bit how printing 3D models adds to your surgical workflow?

Dr. Enzinger: Actually, for most of our tumor surgery patients, we use our in-house 3D printer. We print the skull, or even just the lower jaw, to analyze the tumor in advance. Nearly every tumor patient has their own model. We use our own 3D printers because that way we save a lot of money; buying the models from the industry would be much too expensive. And because it's so cheap, we learned a lot, and our students learned a lot, because we can take our thoughts out of the OR, explain the case on the model to everybody, and discuss it before we go into the OR.

Matthias: Yeah, and I think also the example you mentioned, with the mixed reality solution where you can collaborate on the same model. Of course, you then lack a bit of the haptic feedback, which sometimes can't be replaced. But it is a really nice experience that I would recommend to everyone in the audience who hasn't yet had the chance to try this joint collaboration.

Dr. Enzinger: I think mixed reality is going to be a real game changer, because then I don't have to be in the same room, or even in the same city, to talk things over. And especially in times like this pandemic, students can sit at home and watch, and you can explain exactly what you mean, and they can explain back what they mean. Unfortunately, these kinds of goggles are quite expensive now, but I think they're going to get cheaper and cheaper in the coming years.

Matthias: So, the last question from the audience for you, because I'm looking at the clock and we're a little short on time. Lauren is asking: "I think we need a new specialist for the digital OR to prepare images for surgery. It seems to be very time consuming for surgeons, and their IT knowledge is somewhat missing. What is your experience with that?"

Dr. Enzinger: I think that nowadays this is part of the surgeon's preparation for the operation, because you learn so much about the patient through this planning. And I think the planning is really important to avoid any harm to the patient. If you hand this over to the IT department, too much information is lost in the communication. I think it is impossible for an IT department to plan things like that.

Matthias: Simon, thank you very much for that excellent presentation and for answering the questions from the audience. So again, greetings to Austria. Thanks for joining, and talk to you soon.

Dr. Enzinger: Thanks as well.

Matthias: So, our next participant on the panel is Dr. Chung. Dr. Chung is a spine surgeon from the U.S., practicing at Presbyterian St. Luke's Medical Center in Denver, Colorado. Unfortunately, there was a change in the surgical schedule approximately a week ago, so we had to pre-record this session. But I would really encourage you to stay tuned, because Dr. Chung is an excellent, world-renowned expert in degenerative spine disorders, especially focused on very complex cases with severe scoliosis deformities. He's also a real technology enthusiast and an advocate of medical technology pushing the boundaries for patient care, surgical outcomes and patient safety. So, please virtually welcome Dr. Chung.

Dr. Chung: It’s an absolute pleasure to share some of my thoughts and my dreams about where the OR is going in the future. I want to say thank you to Brainlab for allowing this opportunity. And it’s an interesting new environment this year, but please bear with us. This is a whole new world of digital technology. So, let me then switch over.

All right, thank you very much. I wanted to share some of my thoughts on the future of the OR, and how it can help us take care of more complex pathologies more efficiently and safely.

So a little bit about myself. My name is Woosik Chung. I currently live in Denver, Colorado, but I was born in the countryside of South Korea and lived there until I was about six years old. At that point, my father moved us to Malawi, in Africa, where he was helping bring Western medical practices to the continent. I had a great time living in the jungles of Malawi, and grew up educated in the British boarding school system. When I turned 14, our family moved to the United States, and I've been trying to integrate into civilization since then.

Ultimately, I ended up following in my father's footsteps and went into the world of surgery. I remember asking him early on if he had some pearls for me, what his thoughts were on the principles of surgery. And he told me there are three basic principles; if you have a good grasp of them, you can do any type of surgery.

And I'll tell you, it rings true for me, because he was trained as a vascular surgeon, but when he moved to Malawi, he was doing all the surgeries there, from neurosurgery to OB-GYN, orthopedics, vascular surgery and general surgery. And he said: if you know the anatomy appropriately, you're able to visualize your surgical field properly, and you have the right tools, you can really take care of almost all pathologies.

This thought process has held true for me, and as I've developed in my surgical career, I think it has become more and more important. When I think of the current digital OR, it's Harrison Ford looking forward to 2019 in the original "Blade Runner." But how about we look further ahead, to Ryan Gosling's "Blade Runner 2049"? That's where I see the future going. My view of the next steps toward the future OR is that it's no longer a simple room that just houses equipment and serves as a place for surgery. I see every piece of equipment in the room being able to communicate digitally.

The room integrates a patient's health database, not only as a passive data bank, but to actively protect the patient and create an efficient surgical process. The room takes in a growing database of real-time outcome measures to optimize its efficiency, safety and overall patient care. Ultimately, that room becomes what I would term a smart OR: an intelligent environment that continuously learns via a patient-pathology-treatment-outcomes database to optimize the surgical care of the patient, using an integrated, real-time, accurate technology platform. Kind of like what we saw in the movie "Prometheus," where a patient goes into an operating room that can determine the pathology and the treatment options using artificial intelligence.

So, what are the next steps? Well, the Buzz, in this situation from Brainlab, is the brains of the smart OR. We must allow for accurate and easy communication between the mainframe and the surgeons. The smart OR recognizes the patient coming in, and uses that patient-specific surgical checklist.

Furthermore, the system allows for real-time surgical dictation by recognizing each step of the procedure and creating an automatic surgical report. The smart OR also optimizes and regulates environmental controls, such as temperature, humidity, airflow, etc., to keep the OR environment as safe as possible for that specific patient and the procedure the patient is undergoing.

Furthermore, the OR is linked with the SPD to make sure that the correct equipment is brought in for the specific case and the specific surgeon. Overall, the data from each surgery performed in the smart OR is then collected and used not only to improve the smart OR environment, but ultimately also to allow the development of what I would call an artificially intelligent OR.

Furthermore, we need to improve our navigation systems, our actual equipment, which may include a better camera-to-reference-frame system, developing an augmented reality system for mapping the anatomy and pathology of the patient, and also mapping the entire room. That allows us to develop more interactive options, such as an active robot system that can enhance the capability of the surgeon and keep the surgical environment and patient as safe as possible.

So, what about the communication? Well, the communication system between the surgeon and team and the Buzz, or the OR mainframe, needs to allow for multiple avenues of communication, both non-sterile and sterile, while making sure that accuracy is maximized and disruption is minimized.

We can talk about options that include not only examples such as voice and gesture control; the room may also be able to digitally video map the whole space and the movement of each individual in it, and recognize individuals, such as a specific surgeon or a specific nurse, and their preferences. And we can use things like thermal and IR mapping to understand and react to the flow of the room much better.

Furthermore, these rooms can be a protector for the patient: first, do no harm. By having all of the patient's data acquired by the smart OR before the surgery, it can protect the patient by locking out any medications or materials the patient is allergic to, unless expressly overruled by the staff.

And it can monitor and optimize the patient's status based on prior surgical experiences and data points for that specific patient. Each case is preceded by a digital timeout, especially covering the site and levels, allergies, etc. During the surgery, the smart OR can continuously monitor the site, maybe using a digital camera system, for the correct site and the correct levels, by digitizing the patient's data and imaging intraoperatively, and it warns the surgeon if there is a deviation from the planned surgical site or levels.
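A minimal sketch of what such a digital timeout might check is below; the record fields and rules are hypothetical illustrations of the idea, not a real product interface.

```python
# Hypothetical plan pulled from the patient's record by the smart OR.
PLAN = {
    "patient_id": "MRN-12345",
    "site": "L4-L5, left",
    "allergies": {"penicillin", "latex"},
}

def digital_timeout(scanned_id: str, prepared_site: str,
                    materials_on_table: set[str]) -> list[str]:
    """Return warnings; an empty list means the timeout passes."""
    warnings = []
    if scanned_id != PLAN["patient_id"]:
        warnings.append("Patient identity mismatch")
    if prepared_site != PLAN["site"]:
        warnings.append(f"Site/level mismatch: prepared {prepared_site!r}, "
                        f"planned {PLAN['site']!r}")
    for item in materials_on_table & PLAN["allergies"]:
        warnings.append(f"Allergy lock-out: {item}")
    return warnings

print(digital_timeout("MRN-12345", "L4-L5, left", {"latex"}))
# -> ['Allergy lock-out: latex']
```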

So what is this preop data? Well, it isn't just the typical preoperative X-rays or advanced imaging that is DICOM-converted and downloaded to the smart OR. It is all the patient data, including the medical records, allergies, past medical history and physical exam findings, which allows us to better integrate the data for the protection of the patient during the operation, and then also have a linear translation to the outcomes that follow afterwards.

So, by acquiring and digitizing the preoperative information, including the imaging studies with and without gravity, along with medical information such as osteoporosis or osteopenia, we can help with the overall surgical planning before the surgery, but also while the surgery is happening, to correct any variables. Here is another thing that could be incredibly useful, and also time saving, especially for a lot of us surgeons: currently, surgeons spend a significant amount of time after surgery dictating the operative reports, and the more complex the case, the longer the dictation takes.

There are templates and other electronic systems that try to make dictations easier, but these are limited by the generalization of the templates. Furthermore, dictations can be plagued either by oversimplification, when a generic dictation is used to expedite the report, or by the limits of the surgeon's own memory, especially if the dictation is done at a later time.

Therefore, by creating a method of directly observing the surgery, such as a digital camera on the surgeon that is able to recognize the surgeon's specific techniques, the system can dictate the report accurately, in real time. These reports can then be reviewed afterwards and signed off by the surgeons. This smart dictation system is not just limited to surgical dictation; it can also serve as a nurse charting option, an anesthesia report option, etc. Overall, more than the challenges of the technology, it may be the medicolegal concerns that need to be sorted out that are the biggest challenge here.

Furthermore, a smart OR will allow us real-time environmental control of the room, not just for comfort, but really for things like infection control based on that patient. An important contributing factor to OR sterility and infection control, as we know, is the OR environment. Not only is it important for the OR to be able to control the HVAC system for that room, such as airflow, temperature, humidity and door openings, but also to take into account patient factors such as diabetes and surgical duration. The smart OR can then collect the correlating postop data, such as infections, to stratify possible environmental factors that were contributory. Over time, that database continues to grow, and it can allow the development of an artificially intelligent environmental control system.
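As a toy illustration of patient-aware environmental control, here is a minimal sketch; the set-points and the risk rule are invented for illustration and are not clinical guidance.

```python
# Hypothetical baseline set-points for the room's HVAC system.
BASELINE = {"temperature_c": 20.0, "relative_humidity": 55,
            "air_changes_per_hour": 20}

def target_environment(patient: dict) -> dict:
    """Derive room set-points from patient risk factors (illustrative rule)."""
    targets = dict(BASELINE)
    # Tighten airflow and humidity for higher infection-risk cases,
    # e.g. diabetic patients or long procedures, as discussed above.
    high_risk = (patient.get("diabetic", False)
                 or patient.get("expected_hours", 0) > 4)
    if high_risk:
        targets["air_changes_per_hour"] = max(targets["air_changes_per_hour"], 25)
        targets["relative_humidity"] = 50
    return targets

print(target_environment({"diabetic": True, "expected_hours": 6}))
```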

As we know, the "kitchen" of the surgical department is the sterile processing department, or SPD. It is here that the equipment is not only sterilized but also placed appropriately in the correct trays. Obtaining and then opening incorrect trays for a procedure or a specific surgeon not only creates inefficiencies for that case, but can also delay another case that actually required that specific tray, and ultimately results in increased cost.

The process here has the smart OR recognize not only the case, but also the specific surgeon, so it can tell the SPD to pull the correct tray. This is especially poignant if the planned surgeon is not the one who ends up doing the surgery: if somebody else has to come in and help out or conduct the surgery, the OR knows to let the SPD know to obtain the correct tray. This is very efficient, and it also decreases the angst in the room, as well as, of course, the overall cost and inefficiencies.

Overall, the smart OR ultimately develops its AI system, which allows for the best care of the patient, by collecting data as it obtains preoperative information and then stratifying different data points. It takes in intraoperative data points as well, alongside the preoperative data, including the type of surgery, anesthesia factors, environmental factors, and other intraoperative data points such as complications, neuromonitoring data, etc.

Furthermore, this enables a digital surgical dictation that really puts together a digitized form of what was actually done in the surgery, and therefore allows a much easier way of stratifying and correlating the data. The system then also collects longitudinal postop data, both while the patient is an inpatient and when they come back for postop visits as an outpatient, and that data is collected digitally. That data is then analyzed, and as the number of patients increases, it can start creating an artificially intelligent processing system that helps optimize patient care. That's the big loop we're seeing here.

Well, to get there, we've got to make sure our technology, our actual equipment in the OR, is as efficient and accurate as possible. To best take care of the patient's surgery, we need the best system for safety and accuracy. This is especially poignant in cases such as long segment constructs, revision surgeries, poor bone quality or a hypermobile spine. Currently, the mainstay of navigation is a robust camera-to-reference-frame system. However, as anybody who has used the current navigation systems knows, this can be very limited and challenging. So we need to look at further options, such as a multiple reference frame arrangement, or a completely different system such as visual anatomy tracking using virtual reality cameras, or a hybrid system with reference frames and augmented reality.

So by using multiple navigation technologies at the same time, such as our regular reference frame on the patient linked to a camera, combined with an augmented reality camera, we may be able to increase the overall accuracy, and ultimately the safety. This is being looked at in multiple studies, but I don't know if we're quite there yet.

So, where does digital visualization come in? First, we need to define what virtual reality, mixed reality and augmented reality are. With these, we can then create a system that overall produces what I would call a super reality: an environment where we can easily move from virtual reality, where the entire simulation is virtual, to mixed or augmented reality, where we superimpose certain virtual data points onto the physical reality, allowing us to do our cases. And these can also be used for training and educational purposes.

In the preoperative setting, these systems allow us to combine and integrate multiple different imaging modalities, and to visualize the anatomy and pathology in different layers and dimensions, which enables us to better plan the most effective and minimally invasive surgery. Furthermore, this allows us to communicate with and educate our patients more effectively in the preoperative setting. Intraoperatively, we can take the integrated preop images and combine them with the intraop acquired images, using digital bending technologies to match the preop supine image to the intraop prone anatomy.

We can then take these images and superimpose them over the patient, creating the augmented reality visualization, which allows the surgeon to keep their eyes on the surgical field rather than going back and forth between the monitors and the patient. And if need be, you can literally push the image next to the surgical field, so you can see the real field as well as the virtual field right next to the surgical site.

These technologies, mixing virtual and augmented reality systems, can also allow for more tactile education and training. This could be especially useful in training settings where residents and fellows can practice and learn in their own time, and also in settings where direct contact may be limited, such as this pandemic.

Intraoperatively, options for anatomy acquisition and navigation can be expanded to augmented reality anatomy digitization, where surface matching systems, such as the one you can see here, map the anatomy and allow for an accurate navigation system, giving us more options than just the typical camera-to-reference-frame system.

Validation studies have also shown that it is not only feasible to use these augmented reality navigation systems in minimally invasive cases, but that the technology has improved accuracy significantly.

Furthermore, these systems can then allow us to do what I term dynamic tracking. As these augmented reality camera systems improve, they can allow segmental tracking of the anatomy as the surgeon sequentially corrects the architecture of the spine, for example using techniques such as osteotomies, placement of interbody spacers, derotation techniques or compression techniques. The system allows live tracking, so we know the anatomy in three dimensions and can place the augmented reality on top of the anatomy as it is being corrected. This may influence the number of levels needed while we are doing the actual surgery.

So, how do we interface in the OR with reality, virtual reality, mixed reality and augmented reality? Well, there are already multiple platforms currently being developed and tested. One such device is the Magic Leap system, in collaboration with Brainlab. Here, we have to look at multiple factors in bringing such an interface system into the OR, such as strict sterility and the actual communication interface options. It allows multiple levels or layers of reality, from absolute virtual reality all the way to letting us look directly at the anatomy itself with nothing in between.

Ergonomics is important, especially for longer surgeries, if we're wearing these systems on our heads for the duration of the procedure. And it needs to be an active system that helps us not only with visualization of the anatomy, but also warns us, to keep the patient as safe as possible if there is a possible danger nearby.

These augmented reality systems are quickly expanding and improving across different modalities. For example, we're able to use these systems to place pedicle screws, as well as in rod bending, as you see in this virtual reality image, where you can bend the rod to match the image shown in front of your camera.

The next step in the development of the smart OR is therefore the seamless integration and communication of all of these technologies into a platform that allows the system to intelligently coordinate the use of all the patient-specific data and the known outcomes with the actual equipment in the OR, for the efficiency and safety of patient care.

By bringing together, optimizing and integrating all of these OR technologies, we can begin to get ready for the revolution of the OR, basically what I term the smart OR.

So, the revolution allows the emergence of the smart OR. The revolution is a system that allows for continuous learning and optimization of the care of the patient, which overall creates a system that leads to what we would call self-awareness. We just hope that the self-awareness doesn't become Skynet, but rather the robot Baymax from "Big Hero 6."

So what are the next immediate steps in bringing the smart OR to fruition? Well, we need to develop a platform that is hyper-intuitive to use, like an iPhone, for example. We need real-time and absolute accuracy in the anatomy-to-equipment communication. Each piece of equipment needs to communicate seamlessly with the others, as well as with the preoperative information and postop outcomes data. And those acquired outcomes data are continuously analyzed to determine the pertinent factors in optimizing the care of the patient and minimizing complications. All of which ultimately allows us to bring together the smart OR, an artificially intelligent system.

So, one of the concerns is that, “Hey, these robots are going to steal my job.” Well, you know what, that’s okay. I can retire, maybe go snowboarding, learn how to drive a car, practice martial arts and hang out with my family. But really, I can move upward towards my own self-actualization. So, I don’t think we need to be scared about an artificially intelligent system if we can truly develop it and bring it to fruition. And that’s why I term this digital spine surgery smart digital spine surgery.

Thanks very much. I hope this was interesting and may have stimulated some of your own thoughts, and hopefully future visions. Take care, bye-bye.

Matthias: Fantastic. So, thank you, Dr. Chung. Not only did I learn that there is a new Blade Runner movie, which I wasn’t aware of, to be honest, but I also learned a lot about how technology can further drive healthcare in the field of spinal surgery. And I think that is a really, really exciting surgical field. So thanks for bringing up all those examples, really exciting.

So last, but not least, we have speaker number four of the technology panel today. I’m happy to introduce Jennifer Esposito. Jennifer is Vice President at the famous company Magic Leap, which provides the mixed reality solutions that we mentioned one or two times today.

And Jennifer worked for over 13 years at GE Healthcare, and she has been General Manager of the Health and Life Sciences Group at Intel Corporation. When I met Jennifer for the first time a couple of years ago, back when we were still working on that top secret project that Jennifer mentioned yesterday, I think, I could sense that she is a really strong believer in artificial intelligence, but also in 5G and computer vision.

And that those technologies specifically would have a big influence on transforming healthcare, and on providing better quality of life, safety and security worldwide. So, Jennifer, hello to Florida. And now that we have heard great examples of how mixed reality improves vascular surgery this morning, then CMF, then ENT surgery and spinal surgery, do you feel a little bit pressured?

Jennifer: Yeah, lots of pressure, right. I think everybody really knows what all of the potential is. So it’s super exciting to be here today.

Matthias: So thanks for joining us, Jennifer.

Jennifer: Thank you.

Matthias: So Jennifer, you are going to talk to us about spatial computing in digital health.

Jennifer: Yeah, absolutely. So, I really wanted to… First, thanks, everybody, for having me here today. Really amazing presentations over the last day and a half or so that really nail the potential for spatial computing and mixed reality in surgery.

So, I’m going to tell you a little bit about Magic Leap. And then I’m going to spend most of the time talking about some of the use cases outside of surgery that I think are really interesting for people to consider. So, you know, Magic Leap began with an idea about bringing the physical world and the digital world together. And I think the really nice thing about that is that it became clear that our technology has the potential to really amplify human potential. And I think that’s a lot of what I heard the earlier speakers talking about. So really, what you’re doing is augmenting the physical world with digital content and bringing it together in new ways. Sorry for my voice.

So, you know, spatial computing, when you think about how to define it, is really about allowing the 3D space around you to become this big canvas for everything that you’re doing, so it becomes your digital user interface. And it really eliminates that fixed user interface that we’re used to today with laptops and phones and iPads and all sorts of things.

And I think I heard a lot of that also from the different presenters that we just had here over this last session. So I think it’s important to step back and just make sure that everybody is level set on, you know, what this really means and why spatial computing or mixed reality is different, you know, from other technologies that are out there.

So if you start with things like VR, or virtual reality, you know, your mobility is really restricted, because you can’t see the physical world around you; you’re completely immersed and essentially isolated from the physical world. So that definitely has some limitations from the perspective of the kinds of use cases that you can apply.

If you think about the augmented reality that you can do today on either a smartphone or a tablet, you’re still holding up a screen, and your hands aren’t free. And the content itself really isn’t aware of the physical surroundings. So, what we’re focused on here with spatial computing is really this interaction between the physical and the digital world, with the digital content having awareness of what that physical world is.

So that really enables a different level of collaboration with 3D content. It allows the sharing of physical spaces in the digital world so that people really don’t have the limitations of geographic boundaries, for example. And that’s the sort of scenario that I think is even more obvious, more present today, given all of the restrictions we’ve had around being physically together with the pandemic.

And I think the other important point to mention here is not only are your hands free, but you really are fundamentally becoming free of screens, any type of physical screen that’s out there. And I think a lot of the prior examples that we saw earlier today in the operating room, you know, show how you can take away a lot of that physical real estate that is really limiting the spaces and limiting also the efficiency of the user interface for surgeons and other clinical workers.

So I think Brainlab has done a really amazing job taking the current capabilities of spatial computing today, 3D visualization, collaboration, and really making them real. And then outlining a really amazing vision for digital surgery in the operating room of the future.

So what I’m gonna focus on today is the kind of stuff that, you know, we’re working on beyond surgery or outside of surgery. Because I think it’s really important having listened to all of the great stories over the last couple of days that it’s clear, this audience really understands and appreciates how spatial computing can be used in surgery and in the operating room.

And so I thought it would be good to just spend the last few moments of this wonderful symposium talking a little bit about what we call spatial health, or the use cases that extend beyond the hospital. And so I think, you know, first of all, when you think about leveraging spatial computing, it’s really about helping extend care away from the hospital, right?

So as you think about the current climate that’s pushing towards more digital versions of, you know, medtech, healthtech, fittech, right, all of these different terminologies around how you can impact healthcare and a person’s health, the real capability that spatial computing brings is its ability to start shifting care that isn’t acute, that isn’t specialized, outside of the hospital into the community or even into the home. And it also extends that, you know, not just to people that are already sick or already have some sort of condition, but to people that are young, healthy and well, giving them capabilities to help maintain that.

And so we see a really broad spectrum of use cases. And I actually call them use case categories, because once you start digging in, you know, you’re really talking about probably hundreds of potential use cases that leverage spatial computing along with all of the additional technologies around it; Matthias mentioned that I’ve always been super interested in things like AI and 5G and IoT.

And if you think about the convergence of all of those things today, it’s really going to enable a really new form of health and wellness as we go into the future. So there’s lots of use cases across the board. You can kind of see what we’ve laid out here, many of them were mentioned previously in the presentations throughout the symposium.

But what I’d like to do is, you know, really focus in on what’s happening very much in the real world now. You know, some of it is certainly driven by the environment that has come from COVID-19, and this very advanced focus and acceleration that we’ve seen over the last several months around telehealth, right?

So just here are some data points from a McKinsey study that came out a couple months ago following the pandemic. And you can kind of see that there’s really widespread interest, both from consumers and providers in moving as much care as possible outside of hospitals and into the home.

And so that means things that aren’t acute, and it also means leveraging both the workforce that’s out in communities and the family members of patients, right? But there’s lots of opportunity here to shift some of that care. And so some of the estimates from this study really show that there’s opportunity, right?

You know, in the case of the graphic on the right-hand side, from a Medicare and Medicaid perspective here in the U.S., they’re stating that there’s actually the potential to move about 20% of that care to digital means. So on the next couple of slides, I’m going to show you a few of the partners that we’re working with today, and solutions that are really focused on enabling this kind of activity to be moved outside of physical hospitals and clinics.

So the first example that I’ll talk about is something called Eye-Sync from a company called SyncThink. We’ve been working with SyncThink for quite some time. And they are real experts in using eye tracking for clinical purposes. They’re very much focused on brain health, and performance. And they’re working with the Magic Leap device and eye tracking to enable monitoring and improvement of brain health. In some cases, that includes things like using the device and eye tracking from their perspective to diagnose conditions, for example, like concussion, or to help athletes with performance management and fitness tracking. So, lots of really interesting potential additional use cases for what SyncThink is working on. And it really shows that, you know, spatial computing has a potential to really enable new forms of diagnostics and health monitoring in the future.

The second example that I’ll show you is from a company called Heru; they come out of the Bascom Palmer Eye Institute here in Miami. And they are focused on creating AI-driven, personalized vision diagnostics, and ultimately augmentation. If you think about the population out there, there are approximately 450 million patients worldwide experiencing some sort of visual field defect or double vision, which can come from a variety of different clinical conditions like glaucoma, or stroke, or macular degeneration. And they’re using the Magic Leap device to diagnose some of these conditions, replicating on the device visual field tests and other diagnostic tests, using real-time eye tracking as well. And then ultimately, they’re also moving towards using the same device for actual augmentation or correction of some of those conditions. So some really interesting work is going on here in the field of ophthalmology that leverages, again, eye tracking and other capabilities of spatial computing.

And then the last example I’ll talk about is one from our partner CSS. CSS is an insurance company in Switzerland, and they really see potential to use spatial computing to enable advanced movement tracking, to deliver 3D virtual therapies, right, so either physical therapy or other sorts of rehab. And, you know, I think the interesting thing here is that, again, the device is not only being used to deliver the therapy, in the sense that the patients are able to visualize a therapist or a coach in 3D, but the device is also helping measure and understand how well the patients are actually doing the different exercises. So by measuring the different movement capabilities, the patient can actually get pretty much real-time dynamic feedback about how well they’re doing. And so this really presents an interesting opportunity: to track a patient over time, to automatically update, using AI, the kinds of therapies that they’re undergoing, and to really bring this capability to scale in the sense that it can occur wherever a patient is at; it doesn’t need to happen in a physical location.
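As a simple illustration of how measured movement can drive real-time feedback, here is a sketch that computes a knee angle from three tracked joint positions and checks it against a target range; the joint positions, exercise and thresholds are made up for the example and are not CSS's actual solution:

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1, v2 = a - b, c - b
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

def feedback(angle_deg: float, target=(85.0, 95.0)) -> str:
    """Hypothetical real-time cue for a squat exercise targeting ~90 degrees."""
    low, high = target
    if angle_deg < low:
        return "bend less"   # knee flexed past the target range
    if angle_deg > high:
        return "bend more"   # not enough flexion yet
    return "good"

# Example frame: hip, knee and ankle positions (meters) from movement tracking
hip, knee, ankle = np.array([0, 1.0, 0.0]), np.array([0, 0.5, 0.1]), np.array([0, 0.0, 0.0])
print(feedback(joint_angle(hip, knee, ankle)))  # leg nearly straight -> 'bend more'
```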

So I think these are also interesting to consider in the context of, you know, how we’ve been talking about digital surgery and the digital OR today. Because many of these capabilities help extend that whole experience, from the perspective of a patient, for example, that might be having knee surgery, the ability to really understand their movement and their other physical factors, both before and after the surgery. And coupling that with all of the additional information that is being captured as part of the overall digital operating room activities really presents an interesting opportunity for a much more end-to-end and personalized approach.

And so that’s my last slide. I’m going to turn it back over to Matthias. I really hope that this was just a good introduction into some of the different use cases that expand, you know, to a broader concept of digital health using spatial computing. So thank you.

Matthias: So, thank you very much, Jennifer. I think that was a fantastic presentation. And it was a really nice last presentation because it widened the focus a bit, showing also the possible applications outside the actual operating room. And I think that was a topic that we covered one or two times today, at least: that the operating room doesn’t end in that one specific room, but it’s really a complete patient journey, a complete healthcare workflow. And I think those were excellent examples completing that picture. So there’s a question from the audience for you.

Jennifer: Sure.

Matthias: “Jennifer, do you think there is a future in combining spatial computing and telehealth for those areas that are more remote, to bring them the specialties or physicians that are missing?”

Jennifer: Yeah, I think that’s actually one of the most important, you know, sort of bigger picture points about spatial computing, right? Because I think it really helps eliminate the boundaries of geography. Not only are you able to use spatial computing to visualize digital content in a variety of different locations and share it regardless of where you’re at, but you can actually have the benefit of an enhanced collaboration. So not just the 2D video stream, for example, but a 3D co-presence capability, so that patients can interact with their doctors and doctors can actually see their patients in 3D to really enhance what a physical exam might be in a digital setting.

And so I think, you know, if you think about this, it really does make it possible in the future for patients to access specialists regardless of where they live. And that means that, you know, really the right clinical expert can be called in at any time to consult on cases, and to really make sure that in some really unique scenarios, right, you’re not limited by where you’re physically at.

And, you know, I think there have been lots of studies over the years that show that a lot of how people experience healthcare is very much dependent on geography. And I think that’s something that spatial computing is really going to help address; it’s going to eliminate this “geography as destiny” sort of scenario that I think is very true today.

Matthias: Thank you, Jennifer. And I think essentially, that is also what we have been trying to do here over the past one and a half days, right? We are trying to share the knowledge that we have gathered building operating rooms and setting up state-of-the-art healthcare throughout the planet. And I think we have had excellent examples from Brazil, also a couple of examples from China, and of course, many from Europe.

And I think that is really what we tried to achieve over the past two days: bring together a lot of experts in this virtual way, also in a new format for us, and share the best practices and share that knowledge. So with that being said, I would like to conclude the online symposium 2020.

And maybe allow me a quick summary of what we have experienced in the past two days. I think we discovered, obviously, a lot of points regarding technology and engineering. We have had biomedical engineers, but of course, also really great examples from the industry. And I think technology, that part is covered. That is my take away, that is not the issue, right?

Then we had the second point of planning operating rooms, setting up the IT infrastructure, and also the processes to make that technology become reality and really provide an impact. I think this is something where we can improve a lot in our projects in the next couple of years.

Then we had the third part, maybe the most important one is, of course, the user perspective. We have heard excellent examples from different surgical disciplines on how technologies and also those IT processes and the integration processes will provide better outcomes and more efficient procedures in the future. That was, I think, really exciting.

And we have also heard from the nursing perspective how an optimized workflow and also training capabilities can facilitate running more efficient operating rooms. So that, I think, was a really exciting takeaway for me as well, and that is definitely a target audience that we would like to include more in these kinds of discussion panels in the next couple of years.

So, with all the different examples that we mentioned, and everyone that was participating in that auditorium, either as a presenter, as a listener, or writing questions in between, I think we as a healthcare community are really making a difference here. And there were a few really fascinating examples of how this will become reality in the next couple of years.

And that is just a really exciting thing that I think every one of us should take home now and start the weekend in a really nice way. So thank you very much for your time and for being with us, sharing your knowledge and also trying to learn a couple of new things.

There is one minor thing, of course: the recordings will be available to you sometime afterwards; we need some post-processing to cut them into different pieces. But then you will also have access to that content later on. So thank you very much for participating in the digital OR online symposium 2020.
