This application is a divisional patent application of U.S. utility patent application Ser. No. 13/353,612, filed Jan. 19, 2012, which claims priority to U.S. provisional patent application No. 61/434,242, filed Jan. 19, 2011; the entire disclosures of which are incorporated herein by reference in their entireties.
This invention was made with government support under grant numbers EY019411 and RR024128, awarded by the National Institutes of Health. The government has certain rights in the invention.
The presently disclosed subject matter relates to surgical instruments and imaging equipment, and more specifically, to surgical imaging and visualization systems, instruments, and methods using optical coherence tomography.
Optical coherence tomography (OCT) has emerged as a promising imaging modality for micrometer-scale noninvasive imaging in biological and biomedical applications. Its relatively low cost and real-time, in vivo capabilities have fueled the investigation of this technique for applications in retinal and anterior segment imaging in ophthalmology (e.g., to detect retinal pathologies), early cancer detection and staging in the skin, gastrointestinal, and genitourinary tracts, as well as for ultra-high resolution imaging of entire animals in embryology and developmental biology.
Conventional OCT systems are essentially range-gated low-coherence interferometers that have been configured for characterization of the scattering properties of biological and other samples. By measuring backscattered light as a function of depth, OCT fills a valuable niche in imaging of tissue ultrastructure, and provides subsurface imaging with high spatial resolution (˜1-10 μm) in three dimensions and high sensitivity (>110 dB) in vivo with no contact needed between the probe and the tissue. OCT is based on the one-dimensional technique of optical coherence domain reflectometry (OCDR), also called optical low-coherence reflectometry (OLCR). See Youngquist, R. C., S. Carr, and D. E. N. Davies, Optical Coherence Domain Reflectometry: A New Optical Evaluation Technique. Opt. Lett., 1987. 12: p. 158; Takada, K., et al., New measurement system for fault location in optical waveguide devices based on an interferometric technique. Applied Optics, 1987. 26(9): p. 1603-1606; and Danielson, B. L. and C. D. Whittenberg, Guided-wave Reflectometry with Micrometer Resolution. Applied Optics, 1987. 26(14): p. 2836-2842. In some instances of time-domain OCT, depth in the sample is gated by low coherence interferometry. The sample is placed in the sample arm of a Michelson interferometer, and a scanning optical delay line is located in the reference arm.
The time-domain approach used in conventional OCT has been used in supporting biological and medical applications. An alternate approach involves acquiring as a function of optical wavenumber the interferometric signal generated by mixing sample light with reference light at a fixed group delay. Two methods have been developed which employ this Fourier domain (FD) approach. The first is generally referred to as spectrometer-based or spectral-domain OCT (SDOCT). SDOCT uses a broadband light source and achieves spectral discrimination with a dispersive spectrometer in the detector arm. The second is generally referred to as swept-source OCT (SSOCT) or alternatively as optical frequency-domain imaging (OFDI). SSOCT time-encodes wavenumber by rapidly tuning a narrowband source through a broad optical bandwidth. Both of these techniques can provide improvements in signal-to-noise ratio (SNR) of up to 15-20 dB when compared to time-domain OCT, because SDOCT and SSOCT capture the complex reflectivity profile (the magnitude of which is generally referred to as the “A-scan” data or depth-resolved sample reflectivity profile) in parallel. This is in contrast to time-domain OCT, where destructive interference is employed to isolate the interferometric signal from only one depth at a time as the reference delay is scanned.
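The Fourier-domain relationship described above can be illustrated with a minimal numerical sketch (all parameters here are illustrative, not taken from any particular instrument): a single reflector at depth z produces a spectral fringe cos(2kz) in the detected spectrum, and a Fourier transform of that spectrum recovers a peak at the corresponding depth, i.e., the A-scan.

```python
import numpy as np

def sdoct_ascan(depths_m, reflectivities, k_min=7.5e6, k_max=8.5e6, n_samples=2048):
    """Simulate an SDOCT spectral interferogram for point reflectors and
    recover the A-scan (depth-resolved reflectivity) by Fourier transform.

    k is the optical wavenumber (rad/m), sampled linearly as by an idealized
    spectrometer; each reflector at depth z adds a fringe cos(2*k*z)."""
    k = np.linspace(k_min, k_max, n_samples)          # wavenumber axis
    interferogram = np.zeros(n_samples)
    for z, r in zip(depths_m, reflectivities):
        interferogram += r * np.cos(2 * k * z)        # interference fringe
    ascan = np.abs(np.fft.rfft(interferogram))        # magnitude = A-scan
    dk = k[1] - k[0]
    # fringe frequency is z/pi cycles per unit wavenumber, so z = pi * f
    depth_axis = np.fft.rfftfreq(n_samples, d=dk) * np.pi
    return depth_axis, ascan

# A single reflector 200 um deep yields an A-scan peak near 200 um.
depth_axis, ascan = sdoct_ascan([200e-6], [1.0])
peak_depth = depth_axis[np.argmax(ascan[1:]) + 1]     # skip the DC bin
```

This illustrates why the complex reflectivity profile is captured in parallel: every depth contributes a distinct fringe frequency to the one acquired spectrum.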
Surgical visualization has changed drastically since its inception, incorporating larger, more advanced optics toward increasing illumination and field-of-view (FOV). However, the limiting factor in vitreoretinal surgery remains the ability to distinguish between tissues with subtle contrast, and to judge the location of an object relative to other retinal substructures. S. R. Virata, J. A. Kylstra, and H. T. Singh, Retina 19, 287-290 (1999); E. Garcia-Valenzuela, A. Abdelsalam, D. Eliott, M. Pons, R. Iezzi, J. E. Puklin, M. L. McDermott, and G. W. Abrams, Am J Ophthalmol 136, 1062-1066 (2003). Furthermore, increased illumination to supplement poor visualization is also limited by the risks of photochemical or photothermal toxicity at the retina. S. Charles, Retina 28, 1-4 (2008); J. R. Sparrow, J. Zhou, S. Ben-Shabat, H. Vollmer, Y. Itagaki, and K. Nakanishi, Invest Ophthalmol Vis Sci 43, 1222-1227 (2002). Finally, inherent issues in visualizing thin translucent tissues, in contrast to underlying semitransparent ones, require the use of stains such as indocyanine green, which is toxic to the retinal pigment epithelium. F. Ando, K. Sasano, N. Ohba, H. Hirose, and O. Yasui, Am J Ophthalmol 137, 609-614 (2004); A. K. Kwok, T. Y. Lai, K. S. Yuen, B. S. Tam, and V. W. Wong, Clinical & experimental ophthalmology 31, 470-475 (2003); J. Lochhead, E. Jones, D. Chui, S. Lake, N. Karia, C. K. Patel, and P. Rosen, Eye (London, England) 18, 804-808 (2004).
SDOCT has demonstrated strong clinical success in retinal imaging, enabling high-resolution, motion-artifact-free cross-sectional imaging and rapid accumulation of volumetric macular datasets. N. A. Nassif, B. Cense, B. H. Park, M. C. Pierce, S. H. Yun, B. E. Bouma, G. J. Tearney, T. C. Chen, and J. F. de Boer, Optics Express 12, 10 (2004); M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S. Duker, Optics Express 12, 2404-2422 (2004). Current generation SDOCT systems achieve greater than 5 μm axial resolutions in tissue, and have been used to obtain high resolution datasets from patients with neovascular AMD, high risk drusen, and geographic atrophy. M. Stopa, B. A. Bower, E. Davies, J. A. Izatt, and C. A. Toth, Retina 28, 298-308 (2008). Other implementations of OCT including SSOCT may offer similar performance advantages.
Intraoperative guidance of surgical procedures using optical coherence tomography (OCT) holds promise to help surgeons visualize microscopic tissue structures in preparation for and during surgery. This potential includes visualization of the critical interface where surgical tools (e.g., scalpels, forceps, needles, scrapers) intersect and interact with tissue surfaces and sub-surface structures. In many cases, critical aspects of this dynamic interaction exceed the spatial or temporal resolution of conventional imaging devices used during surgery (e.g., surgical microscopes, endoscopes, ultrasound, CT, and MRI). A particularly compelling case for OCT guidance is in ophthalmic surgery, since OCT is already a widely accepted diagnostic modality in ophthalmology, and real-time visualization of delicate and translucent tissues during intrasurgical maneuvers could be of great benefit to surgeons and patients. In ophthalmic surgery of both anterior and posterior segments of the eye, the typical imaging modality used to visualize microscopic tissue structures is stereo zoom microscopy. Surgical microscopes provide real-time natural color imaging to the surgeon; however, the quality of the imagery is often severely limited by the available illumination and the quality of the patient eye's own optics, particularly for retinal surgery. Additionally, conventional surgical microscopy only provides en-face imagery of the surgical field, which bears little to no depth information, forcing the surgeon to infer when instruments are in contact with tissue surfaces, how deep instruments are penetrating, how thick tissue structures are, etc. As a cross-sectional imaging modality, OCT is particularly well suited to providing critical depth-resolved information in ophthalmic surgery.
The advent of Fourier domain OCT (FDOCT) approaches including both SDOCT and SSOCT is especially promising for providing real time feedback because of their enhanced SNR compared to prior time-domain OCT methods, thus enabling much faster imaging than previously available.
FDOCT systems have been developed for use during surgery, including breast cancer biopsy and surgery and ophthalmic surgery of the anterior segment and retina. High speed FDOCT systems are available, including research SSOCT systems now operating in excess of 5,000,000 A-scans/sec, corresponding to tens of volumes per second. However, these systems are very complex and expensive, and require illumination light levels which may not be safe for human ocular exposure. Even with advances in higher speed OCT scanning, real-time complete volumetric imaging in the intrasurgical setting, where OCT imaging must be safely compatible with other bright light sources, may not be achievable. Thus, there is a need for systems, equipment, and techniques for using current or near-future generation OCT systems to provide useful, real time feedback to the surgeon.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Disclosed herein are microscope-integrated OCT (MIOCT) systems that integrate OCT imaging into the optical path of a surgical microscope for simultaneous direct visual and OCT imaging. As a result, fast feedback is provided to the surgeon with less intrasurgical disruption than previous intrasurgical OCT protocols that require the surgeon to use a hand-held OCT probe during intermittent pauses in surgery. According to an aspect, an MIOCT system utilizes a standard current-generation SDOCT engine which images at a rate of 15,000-40,000 A-scans/sec, ˜20 B-scans/sec (with 1000 A-scans/B-scan), and acquires a complete 3D volumetric image in ˜5 sec. The systems and methods disclosed herein provide useful, near real-time intrasurgical imaging by using current or near-future generation OCT hardware in combination with a feedback control system to localize OCT image data acquisition to the region of the tip of the surgical tool, or the site of its interaction with the tissue. This occurs by tracking some predetermined feature of the surgical tool (such as its tip, some markings made upon it or indented into it, or a reflective spot or light source located on it) and using that position information to direct the OCT system to concentrate image acquisition in a region relative to that position which is of anticipated importance during surgery. A variety of OCT scan patterns, protocols, and displays are disclosed, which may be of particular value for guiding surgery, such as small numbers of B-scans (which can still be acquired in real time as perceived by the surgeon) acquired with specific orientation relative to the tool tip, small volumetric datasets (which can still be acquired in real time as perceived by the surgeon) localized to the tool tip location, or other novel combinations of A-scans.
In addition to providing real-time imaging capability, the image data thus acquired can also be used in combination with the saved instrument track data and image processing techniques to build up and maintain an evolving three-dimensional (3D) rendition of the entire operative field of view. The control system may also perform adaptive sampling of the field of view, for example directing the OCT scanner to prioritize filling in missing information when tool movement reveals a previously unsampled region of retina, which had until then been shadowed by the tool. Thus, in this disclosure, an intelligent feedback control system is disclosed that can be readily modified to the surgical style of the surgeon.
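The adaptive sampling behavior described above can be sketched as follows; this is an illustrative implementation only, with an invented grid representation of the field of view: maintain a per-pixel "age" map recording how long ago each region was sampled, mask out regions currently shadowed by the tool, and direct the scanner to the stalest visible region.

```python
import numpy as np

def next_scan_region(age_map, shadow_mask, block=8):
    """Pick the block of the field of view to scan next.

    age_map     -- 2-D array; time since each pixel was last sampled
    shadow_mask -- 2-D boolean array; True where the tool currently
                   shadows the tissue (those pixels cannot be imaged)
    Returns (row, col) of the top-left corner of the stalest visible block."""
    h, w = age_map.shape
    staleness = np.where(shadow_mask, -np.inf, age_map)
    best, best_rc = -np.inf, (0, 0)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            s = staleness[r:r + block, c:c + block].sum()
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc
```

When tool movement reveals a previously shadowed region, that region's accumulated age makes it the highest-priority target on the next pass.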
According to an aspect, a method for OCT imaging includes receiving multiple OCT B-scans of a field of view area that includes an instrument. The method also includes applying spatial compounding to the B-scans to generate an OCT image of the field of view area.
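For illustration, spatial compounding can be sketched as a simple average over co-located B-scans; the registration step and the additive noise model below are simplifying assumptions. Averaging N scans with uncorrelated speckle-like noise reduces the noise variance roughly by a factor of N.

```python
import numpy as np

def compound_bscans(bscans):
    """Average a stack of co-registered B-scans (shape: N x depth x width)
    to suppress uncorrelated speckle-like noise."""
    stack = np.asarray(bscans, dtype=float)
    return stack.mean(axis=0)

rng = np.random.default_rng(0)
truth = np.ones((64, 128))                                 # idealized reflectivity
scans = truth + 0.3 * rng.standard_normal((16, 64, 128))   # 16 noisy B-scans
compounded = compound_bscans(scans)
```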
According to another aspect, a method for OCT image capture includes determining a location of a feature of interest within an operative field. The method also includes determining a relative positioning between the feature of interest and an OCT scan location. Further, the method includes controlling capture of an OCT image at a set position relative to the feature of interest based on the relative positioning.
According to another aspect, a surgical microscope system includes a heads-up display (HUD). The system also includes an ocular eyepiece unit having the HUD integrated therein for display via the ocular eyepiece unit. Further, the system includes a user interface controller configured to determine surgical information associated with a surgical site image projected for view through the ocular eyepiece unit. The user interface controller is also configured to control the HUD to display the surgical information.
According to yet another aspect, a surgical instrument for use in optical coherence tomography (OCT)-imaged surgical procedures is disclosed. The surgical instrument comprises a body having a predefined shape for improving capture of OCT images of nearby tissue during a surgical procedure, or a body made from a combination of materials that optimizes the instrument's appearance in OCT or video images or reduces the amount by which the instrument shadows the underlying tissue in OCT imaging.
The foregoing summary, as well as the following detailed description of various embodiments, is better understood when read in conjunction with the appended drawings. For the purposes of illustration, there is shown in the drawings exemplary embodiments; however, the presently disclosed subject matter is not limited to the specific methods and instrumentalities disclosed. In the drawings:
The presently disclosed subject matter is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or elements similar to the ones described in this document, in conjunction with other present or future technologies.
Like numbers may refer to like elements throughout. In the figures, the thickness of certain lines, layers, components, elements or features may be exaggerated for clarity.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the presently disclosed subject matter. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Further, it is noted that although the term “step” may be used herein to connote different aspects of methods employed, the term should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Well-known functions or constructions may not be described in detail for brevity and/or clarity.
The presently disclosed subject matter is described below with reference to block diagrams and/or flowchart illustrations of methods, apparatus (systems) and/or computer program products. It is understood that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
In some embodiments, OCT imaging of a surgical specimen or operating field in conjunction with standard surgical microscopy may be used to give the surgeon an additional, two- or three-dimensional view of structures which may be difficult or impossible for the surgeon to visualize with a standard microscope alone. These structures may be difficult to visualize because they are beyond the resolution limit of the microscope optics or of the surgeon's eye, or are poorly lit, translucent, opaque, or buried in a translucent or opaque structure. OCT 2D images may be acquired in a cross-sectional view which complements the en-face view which the surgeon sees through the surgical microscope. OCT 3D volume images convey more information regarding the structures and their spatial orientation and relationships than is available in the surgeon's standard view through the microscope.
As illustrated, the beam-forming optical assembly is positioned between the 2D scanners 206 and the OCT beamsplitter 212 above the shared main objective 214 of the microscope 202. The design purpose of the optical assembly is to match the size, divergence, and scanning geometry of the OCT sample arm beam to the maximum capabilities supported by the microscope 202. In
In order that the divergence of the OCT sample beam generally match that of the microscope image light above the shared objective (which may not be exactly parallel), and therefore to provide the capability to closely match the position of the OCT focus within the patient's retina to that of the microscope, some variability should be provided in the position of either or both of lenses 210 (f5) and 208 (f6), as depicted in
In order to reduce or prevent vignetting of the OCT beam during scanning, while simultaneously making use of the entire optical aperture of the shared objective, optimal design of the beam-forming optical assembly includes provision that the OCT beam pivot through the main objective rather than scan across it. In general, this can be accomplished by designing the optical assembly in such a way as to optically image the optical scanner plane into the main objective back aperture. For the Keplerian telescope depicted in
d6=f6; d5=f5+f6; d4=f5. Eq. (1)
Furthermore, for OCT imaging of a patient's retina with increased or maximum resolution and image brightness, the OCT sample arm beam may be designed to have, at the position of the patient's cornea, a beam diameter which is the maximum over which a human eye is approximately diffraction limited (typically 2-3 mm without use of adaptive optics, and up to 6-7 mm if adaptive optics are used to compensate for the eye's aberrations), and the scanning OCT beam may be designed to pivot through the patient's iris plane rather than scan across it, so that the entire available optical aperture is used for all scan widths without vignetting. The first condition, on beam size at the cornea, is satisfied by realizing that the lenses 226 (f2), 228 (f3), and 214 (f4) operate essentially as a Keplerian beam reducer. As a simplifying assumption, the reducing lens 228 (f3) typically has much less optical power than the main objective 214 (f4), and is located directly adjacent to it. These two lenses can then be considered as operating together as a single lens, with optical power given by the sum of the optical powers of the lenses individually, and located at the position of the main objective 214. According to this approximation, the main objective 214 is replaced by a modified main objective with focal length f4′ according to:
1/f4′=1/f3+1/f4. Eq. (2)
Now the design condition on the choice of lenses f2 and f4′ to ensure the correct beam size on the patient's cornea is given by:
where a1 is the desired collimated beam size on the cornea. Finally, in order to have the OCT beam pivot through the patient's iris plane rather than scan across it, the position of lens f2 should be set so that it forms a real image of the aperture of the shared main objective at the location of the patient's iris plane. Thus, the distances d1 and d2 (which equals f4′+f2) should be set according to:
As a practical design procedure, f2 and f3 should be chosen according to Eq. (2) and (3) given the constraint the microscope imposes on f4, then d1 may be chosen according to Eq. (4).
In the foregoing, it is to be understood that the distances d4, d5, etc. correspond to the center-to-center distances between the lenses referred to. Also, it is to be understood that the relationships given in all of these design equations are simplified according to the assumption that all of the lenses act as ideal lenses, which is appropriate for first-order design. For production design, professional lens design software may be used to optimize higher order performance, while still following the principles described here.
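Because the design equations assume ideal thin lenses, the pivot condition of Eq. (1) can be checked to first order with ABCD ray-transfer matrices: with d6=f6, d5=f5+f6, and d4=f5, the scanner plane is imaged onto the main objective back aperture (the B element of the system matrix vanishes). A minimal sketch, with illustrative focal lengths:

```python
import numpy as np

def propagate(d):
    """Free-space translation over distance d (thin-lens ABCD convention)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Ideal thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f5, f6 = 100.0, 50.0                 # illustrative focal lengths (mm)
d6, d5, d4 = f6, f5 + f6, f5         # distances of Eq. (1)

# Scanner plane -> lens f6 -> lens f5 -> objective back aperture
system = propagate(d4) @ thin_lens(f5) @ propagate(d5) @ thin_lens(f6) @ propagate(d6)
# B == 0 is the imaging condition: all rays leaving one point on the scanner
# plane reconverge to one point in the back aperture, i.e. the beam pivots
# there rather than walking across the objective during scanning.
```

The transverse magnification of this relay is -f5/f6, which also sets how the scanner deflection angle maps into the back aperture.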
In the configuration illustrated in
The computer 304 may be any suitable computing device and include a processor 308 and memory 310 for implementing surgical imaging and visualization of a surgical site and instruments using OCT in accordance with embodiments of the present disclosure. In accordance with an embodiment,
The computer 304 may operate as a controller for controlling a timing and scan pattern of scans captured by the OCT unit 200. The computer 304 may control the OCT unit 200 to capture scan patterns based on a surgical tool position and orientation. For example, the scan pattern of the B-scans may be substantially aligned with an axis of the surgical instrument. In an example, the B-scans may have a pattern with a long-axis aligned substantially with an axis of the instrument and a short-axis substantially perpendicular to the axis of the surgical instrument.
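Aligning a B-scan with the instrument axis amounts to rotating the nominal scan line by the tool's in-plane orientation about the tracked tip position. A sketch of that geometry (the coordinate and angle conventions are assumptions):

```python
import math

def aligned_bscan_endpoints(tip_xy, angle_rad, scan_length):
    """Endpoints of a B-scan line centered on the tool tip, rotated so its
    long axis lies along the instrument axis (angle measured from +x)."""
    half = scan_length / 2.0
    dx = half * math.cos(angle_rad)    # in-plane direction of the tool axis
    dy = half * math.sin(angle_rad)
    x, y = tip_xy
    return (x - dx, y - dy), (x + dx, y + dy)

# A 2 mm scan along a tool oriented at 45 degrees, tip at the origin:
p0, p1 = aligned_bscan_endpoints((0.0, 0.0), math.pi / 4, 2.0)
```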
Referring again to
In accordance with embodiments of the present disclosure, methods for tracking OCT image capture to the location of a feature of interest in an operative field are provided. As an example, such tracking methods may be used to track a location of a feature of interest within one or more captured images. The images may be captured, for example, by the camera 302 shown in
A feature of interest may be, for example, a surgical instrument or a tissue feature within a surgical site view. In an example, a surgical instrument may include a straight edge, a color, a recognizable shape of an area of a surgical instrument, a homogenous texture, a bright area, a light source attached to a surgical instrument, and/or another identifiable feature that may be recognized by suitable analysis performed by the computer 304. The feature of interest may also be a marking of a region of interest on a surgical instrument. The computer 304 may store information about characteristics of surgical instruments for use in recognizing the feature of interest.
The method of
Methods for determining an OCT scan location include those based on determining the absolute position of a pre-set OCT scan location with respect to the field of view of the surgical microscope and imaging devices attached to it, as well as those based on tracing the history of previous scan locations since an absolute determination was last made. Methods for determining the absolute OCT scan location include performing careful calibration between the OCT scanner unit and the surgical microscope field of view prior to surgery, utilizing an infrared camera mounted to the surgical microscope which is capable of directly viewing the OCT beam, and localizing the OCT scan location based on visualizing a common feature of interest in both the OCT image and the video image simultaneously.
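The pre-surgery calibration between the camera view and the OCT scanner coordinates can be sketched as fitting an affine map from a handful of corresponding points (for example, the beam location visualized by an IR camera at several known scanner settings). The correspondence-acquisition step itself is an assumption here; the sketch shows only the least-squares fit.

```python
import numpy as np

def fit_affine(cam_pts, scan_pts):
    """Least-squares affine map (2x2 matrix A and offset b) taking camera
    pixel coordinates to OCT scanner coordinates: scan ~= A @ cam + b."""
    cam = np.asarray(cam_pts, dtype=float)
    scan = np.asarray(scan_pts, dtype=float)
    design = np.hstack([cam, np.ones((len(cam), 1))])   # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(design, scan, rcond=None)
    A, b = coeffs[:2].T, coeffs[2]
    return A, b

def cam_to_scan(A, b, pt):
    """Map one camera pixel coordinate into scanner coordinates."""
    return A @ np.asarray(pt, dtype=float) + b
```

Re-fitting periodically from fresh correspondences is one way to maintain the alignment throughout surgery.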
The method of
The OCT scans acquired at a position of a feature of interest may be B-scans having a predetermined scan pattern. For example, the predetermined scan pattern may be a single B-scan such as the scan pattern shown in
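A small volumetric scan pattern localized to the feature of interest can be generated as a short raster of B-scan lines centered on the tracked position; the specific pattern parameters below are illustrative only.

```python
import numpy as np

def raster_pattern(center_xy, width, n_bscans, n_ascans):
    """Return an (n_bscans, n_ascans, 2) array of (x, y) A-scan positions
    forming a small square raster centered on the tracked feature."""
    cx, cy = center_xy
    xs = np.linspace(cx - width / 2, cx + width / 2, n_ascans)  # fast axis
    ys = np.linspace(cy - width / 2, cy + width / 2, n_bscans)  # slow axis
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx, gy], axis=-1)

# 5 B-scans of 100 A-scans each, over a 0.5 mm square at the tracked tip:
pattern = raster_pattern((1.0, 2.0), width=0.5, n_bscans=5, n_ascans=100)
```

A single B-scan is the degenerate case n_bscans=1; radial or cross patterns follow the same recipe with different coordinate generation.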
In an embodiment, capture of an OCT image may be controlled based on a surgical procedure or an instrument being used during a surgical procedure. For example, the computer 304 shown in
Referring again to
In accordance with an embodiment for tracking a feature of interest using OCT,
The reference signal r(t) may represent a specified feature of a surgical instrument, which may be detected or otherwise identified by the computer 304 or any suitable instrument localizing sub-system. The computer 304 may identify a position and orientation of the instrument with sufficient resolution to direct the system to perform OCT imaging in its vicinity. In an embodiment, the computer 304 may extract the instrument position information by computerized analysis of the color video camera signal, which may be configured to provide a duplicate of the surgeon's view through the surgical microscope. In an example, the video camera image may be aligned and calibrated with respect to the OCT scanner coordinate system, and control provided for such alignment to be maintained throughout surgery. Live image processing can be performed on the video image signal to recognize a feature of interest, such as a surgeon's instrument. The instrument may be configured with features that are distinct from the rest of the surgical view, such as straight edges, non-tissue colors, homogeneous texture in its body, or brightly reflective points. The computer 304 may implement a computer vision algorithm that recognizes these features and estimates the position and orientation of a tool in view. For example, such an algorithm may limit the search region by observing a limited square view in the most recently acquired image frame of the camera. In this example, a limited square is centered on the initially-estimated tool location, initialized either at the center of the field or by a user who is simultaneously operating the tracking software. Within the limited square, the location of the peak intensity is determined, and the limiting square is re-centered at this location for the next image frame. Because the surgical instruments may have brightly reflecting tips, the brightest part of the view will be at the tool tip in this case.
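The limited-square search described above can be sketched as a single tracking step: crop a window around the previous estimate, locate the brightest pixel (the reflective tool tip, under the stated assumption), and re-center the window there for the next frame.

```python
import numpy as np

def track_bright_tip(frame, prev_rc, half_window):
    """One tracking step: search a (2*half_window+1)^2 window of `frame`
    centered on the previous estimate prev_rc = (row, col); return the
    peak-intensity location in full-frame coordinates."""
    h, w = frame.shape
    r0 = max(prev_rc[0] - half_window, 0)     # clamp window to the frame
    c0 = max(prev_rc[1] - half_window, 0)
    r1 = min(prev_rc[0] + half_window + 1, h)
    c1 = min(prev_rc[1] + half_window + 1, w)
    window = frame[r0:r1, c0:c1]
    dr, dc = np.unravel_index(np.argmax(window), window.shape)
    return (r0 + dr, c0 + dc)                 # becomes prev_rc next frame
```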
In another embodiment, an infrared (IR) sensitive video camera may be used for capturing images. As a result, visualization of the OCT illumination may be provided, rather than the visible illumination. This configuration may allow the system to verify that the beam is actually coincident with the location of interest, such as a surgical instrument tip. In this case, a computer vision algorithm may be configured to recognize the known and expected shape of the beam focused on the sample. An alternative embodiment uses an additional imaging modality which provides a higher quality en-face view of the surgical field than a color video camera, such as a scanning laser ophthalmoscope (SLO) which can be built into the OCT system for simultaneous OCT/SLO imaging. An alternative embodiment may use an aspect of the OCT imaging system itself to localize the instrument position. For example, surgical tools constructed of metal or plastic may be identified in OCT B-scans by their specific reflectance or shadowing profile. Since OCT B-scans are highly localized in the direction perpendicular to their scan direction, they would not provide any localization information in that dimension. However, the instrument may be equipped with a predetermined marking (such as a divot) which may be recognizable in an OCT B-scan such that a computer vision algorithm can distinguish between the subject tissue and features of a surgical instrument in the cross sectional view of a B-scan. To find the instrument within a view such as a surgical site view, the OCT system may scan in a “search mode” until the instrument is recognized. The OCT system may determine whether the feature of interest is contained within a captured image, and capture images of a different area of the view to search for the instrument in response to determining that the feature is not contained within the captured image.
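Identifying an instrument by its shadowing profile can be sketched as flagging B-scan columns whose integrated backscattered signal drops well below the scan's median column energy; the threshold fraction below is an assumption, not a disclosed parameter.

```python
import numpy as np

def shadowed_columns(bscan, drop_fraction=0.5):
    """Return indices of A-scan columns whose total backscattered signal is
    below drop_fraction of the median column energy -- a crude signature of
    the shadow cast by a metal or plastic instrument over the tissue."""
    energy = bscan.sum(axis=0)                 # per-column integrated signal
    threshold = drop_fraction * np.median(energy)
    return np.flatnonzero(energy < threshold)
```

In a search-mode loop, a contiguous run of flagged columns would mark a candidate instrument location within the current B-scan.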
Once the instrument is recognized, the system may enter a “locked-on” mode, such as the example method shown in
Various surgical instruments may be tracked in accordance with embodiments of the present disclosure. These instruments may be used in OCT-imaged surgical procedures as disclosed herein. Example surgical instruments that may be tracked include, but are not limited to, scalpels, forceps, scrapers, scissors, picks, vitrectomy tips, and other instruments and tools in common use in microsurgery including vitreoretinal surgery. These instruments may include a body having a predefined shape for improving capture of OCT images of nearby tissue during a surgical procedure. Example body shapes include, but are not limited to, flat or sharp-edged shapes. Such shapes may have varying reflectance and shadowing patterns. The body of the instrument may be constructed from a variety of materials, including metals, plastics, polyamide, silicone, and composites. Other example materials include clear or tinted plastic, polymers, glass, ceramics, or other materials to allow control of the transmission and reflectance of an OCT beam. Such instruments may be compatible with intrasurgical OCT imaging as described herein and optimized for such factors as transparency to OCT light, or having localized or distributed reflecting material embedded within them for enhanced visibility with OCT. Additionally, instruments may be specifically designed or modified for compatibility with the tool localizing sub-system as disclosed herein. For example, instruments may be designed for detection by a color video camera, IR camera, or SLO image and may be specially marked in a certain pattern of one or more markings or color which uniquely identifies the instrument position and orientation in the en-face camera view. In an example, the body of an instrument may have multiple markings on its surface that have a predefined spacing for use in determining a distance in a captured OCT image including the markings.
Such instruments may also be modified or designed to have a small light source or emitter, such as a light emitting diode or optical fiber tip embedded within them, so that the emitted light can be detected and localized by the en-face camera image analysis software. In an example, an instrument body may define an interior channel such that an optical fiber may be embedded within it and connected to a light source at one end. The opposing end of the optical fiber may be positioned to terminate at a pre-defined location within the surgical instrument for view from an exterior of the body such that when the light source is activated, light is emitted and viewable for tracking the instrument. Further, all or a portion of the body or the surface may be modified to selectively increase or decrease OCT transmission and reflectance, such as through abrading or diamond dusting the surface or alternatively embedding reflectors within the body to increase reflectance on OCT and increase visualization of the instrument. Modification on the surfaces can be performed to decrease reflectivity and further improve visibility of underlying structures. In an example, an instrument tip may have portions of reflective and non-reflective material to optimize the view of surrounding structures while at the same time, maintaining optimal view of the instrument for OCT control or for view by a surgeon.
The error signal e(t) may be a difference between the reference signal r(t) and the output signal y(t). A controller sub-system Gc(s) may employ predetermined information about the characteristics of the instrument localizing sub-system and plant Gp(s), along with suitable feedback control algorithms, to process the error signal e(t) and produce an input signal x(t) that directs the OCT scanner unit 200, represented by the plant Gp(s), to perform OCT imaging at the location y(t), which the controller Gc(s) drives to track the feature of interest. The OCT scan unit 200 comprises the plant, which is controlled by the computer 304 implementing the model of
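The feedback relationship among r(t), e(t), x(t), and y(t) described above can be sketched in discrete time as follows. This is an illustrative simulation only: the proportional-integral controller, the first-order plant response, and all gain values are assumptions chosen for clarity, not the characteristics of any actual OCT scanner unit:

```python
# Illustrative sketch of the feedback loop: e(t) = r(t) - y(t) is fed to
# a controller (here a simple proportional-integral law standing in for
# Gc), whose output x(t) drives a plant (here a first-order lag standing
# in for Gp) so that the scan location y(t) tracks the reference r(t).

def simulate_tracking(r, kp=0.8, ki=0.2, alpha=0.5, steps=50):
    """Drive the plant output y toward the reference r (feature position)."""
    y = 0.0            # current scan location y(t)
    integral = 0.0     # accumulated error for the integral term
    history = []
    for _ in range(steps):
        e = r - y                      # error signal e(t)
        integral += e
        x = kp * e + ki * integral     # controller output x(t)
        y += alpha * (x - y)           # first-order plant response
        history.append(y)
    return history
```

With these assumed gains, y(t) converges toward the reference, illustrating how the controller causes the scan location to track a (here stationary) feature of interest.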
The OCT scan unit 200 may be represented by the plant Gp(s) shown in
In the example of
Various mechanisms can be utilized for guiding an SDOCT scanner to scan a feature of interest. In an example, smooth sub-millimeter (local) displacements of a surgical instrument on the SDOCT B-scans may be tracked. The trajectory of the surgical instrument may be predicted, and its motion locked onto, within a Kalman-filtering framework. An en face video camera with a large field of view may be used to track and compensate for large displacements, such as when a surgeon moves the instrument a large distance very quickly.
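The Kalman-filtering idea referenced above can be illustrated with a one-dimensional constant-velocity filter that locks onto a sequence of instrument-tip position measurements. This is a generic textbook sketch under assumed parameters; the state model, noise covariances, and function name are illustrative assumptions, not the filter design of any particular MIOCT system:

```python
import numpy as np

# Illustrative sketch: a 1-D constant-velocity Kalman filter that could
# lock onto an instrument tip's trajectory across successive B-scans.

def kalman_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])             # only position is observed
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.zeros((2, 1))                   # state: [position, velocity]
    P = np.eye(2)                          # state covariance
    estimates = []
    for z in measurements:
        # Predict the next tip position from the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measured tip position from the current B-scan.
        innovation = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ innovation
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

Because the filter also estimates velocity, it smooths measurement noise while following a steadily moving tip, which is the "locking onto" behavior described above for smooth local displacements.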
To fuse information from multiple scans, several repeated fast sparse 3-D scans may be rapidly captured. Subsequently, captured scans that have been affected by motion artifacts may be removed. Next, the artifact-free scans may be fused and a dynamically updated high-resolution 3-D scan generated.
At each time point, the MIOCT system may capture sparsely sampled volumetric scans with a significantly lower number of B-scans than the target resolution. The image fusion time-window, i.e., the optimal number of volumes to be fused (N), may be selected based on a maximum preset threshold, which can be decreased in case of scene change. Since the number of frames in each sequence (K) may be relatively small, each scan may be captured very quickly. For this reason, it may be assumed that some of these sequences will be less affected by abrupt patient motion. Such sequences may be detected, reordered, and interlaced to create a densely sampled, artifact-free representation of a surgical view, such as the retina.
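The detect-reject-interlace scheme described above can be sketched as follows. Here each of N sparse volumes is assumed to hold K B-scans taken at interlaced slow-axis offsets of a dense K·N grid; the motion score (mean absolute frame-to-frame difference) and its threshold are illustrative assumptions, not the disclosure's actual artifact-detection criteria:

```python
import numpy as np

# Illustrative sketch: score each sparse volume for motion corruption,
# discard volumes that score too high, and interlace the survivors into
# a densely sampled volume at their slow-axis offsets.

def fuse_sparse_volumes(volumes, motion_threshold=0.5):
    """volumes: list of N arrays shaped (K, H, W), where volume i holds
    B-scans at slow-axis positions i, i + N, i + 2N, ... of the dense grid.
    Returns the dense (K*N, H, W) volume and a mask of filled slices."""
    n = len(volumes)
    k, h, w = volumes[0].shape
    dense = np.zeros((k * n, h, w))
    filled = np.zeros(k * n, dtype=bool)
    for i, vol in enumerate(volumes):
        # Crude motion score: an artifact-laden volume shows large
        # frame-to-frame intensity changes between consecutive B-scans.
        score = float(np.mean(np.abs(np.diff(vol, axis=0))))
        if score >= motion_threshold:
            continue                  # reject this volume as motion-corrupted
        dense[i::n] = vol             # interlace at this volume's offset
        filled[i::n] = True
    return dense, filled
```

Slices left unfilled by rejected volumes could be replaced by the corresponding slices of a later sparse capture, which is what makes the high-resolution volume dynamically updatable.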
In an example, a computer, such as the computer 304 shown in
In accordance with embodiments of the present disclosure, a surgical microscope system may be configured with a heads-up display (HUD) for providing surgical information to a surgeon. A HUD may integrate visualization of OCT images and/or other surgical information into the optical viewports of a surgical microscope, such as the MIOCT system 100 shown in
Information displayed by the HUD may be viewed through an ocular eyepiece unit, such as the oculars 308 shown in
Referring to the example of
It is noted that many of the examples provided herein relate to ophthalmic surgery; however, the systems, methods, and instruments disclosed herein may also be applied to other types of surgeries or any other suitable procedure. Example surgeries include, but are not limited to, neurosurgery, breast surgery, dermatologic procedures, otolaryngologic procedures such as those involving the tympanic membrane, or any other surgeries requiring precise maneuvers to small subsurface structures (e.g., Schlemm's canal) visible on OCT or other imaging equipment.
The various techniques described herein may be implemented with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the disclosed embodiments, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computer will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations.
The described methods and apparatus may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as an EPROM, a gate array, a programmable logic device (PLD), a client computer, a video recorder or the like, the machine becomes an apparatus for practicing the presently disclosed subject matter. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates to perform the processing of the presently disclosed subject matter.
Features from one embodiment or aspect may be combined with features from any other embodiment or aspect in any appropriate combination. For example, any individual or collective features of method aspects or embodiments may be applied to apparatus, system, product, or component aspects of embodiments and vice versa.
While the embodiments have been described in connection with the various embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiment for performing the same function without deviating therefrom. Therefore, the disclosed embodiments should not be limited to any single embodiment, but rather should be construed in breadth and scope in accordance with the appended claims.