Special topic on photonics and AI in information technologies

Artificial Intelligence (AI) is revolutionizing many aspects of our society, enabling a wide variety of real-life applications, from decision-making tasks, such as image classification and autonomous vehicle control, to engineering design, analysis, and manufacturing, such as inverse design. With the help of deep learning algorithms that identify recurring patterns in previously collected data, an AI system can predict future events and make decisions.

The huge success of AI largely benefits from the rapid advances of deep neural networks, whose growing computational complexity requires dedicated hardware accelerators. Matrix multiplication is an essential but computationally intensive step. To accelerate this step beyond what Central Processing Units (CPUs) can offer, Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs) have received extensive interest as AI accelerators. However, the performance gains of these digital electronic solutions will eventually face limitations with the end of Moore's law and Dennard scaling. Here, photonics has long been recognized as a promising alternative to address the fan-in and fan-out problems of linear algebra processors [1,2]. Indeed, studies reveal that the key performance limitation of electronic processors is power consumption, where data movement (rather than processing) dominates due to RC limitations; photonics, in contrast, can move data relatively freely and can take advantage of a bandwidth that matches the photodetection rate.

Figure 1 illustrates how the optical matrix multiplier has evolved since the 1970s, from free-space optics [3], to fiber optics in the 1980s [4], and eventually to current solutions based on integrated photonics [5]. The growing capability of photonic integrated circuit (PIC) manufacturing at unprecedented levels of integration and complexity has ignited the field of photonic accelerators for AI. The current success of photonic integration stems from modern information technologies; however, continuing innovations are stretching the limits of the established platforms, one example being the crucial need for photonic nonlinear activation components that complete the full processing loop of deep neural networks in the optical domain. Undoubtedly, AI can, in turn, facilitate the design of complex photonic components and systems.

As a follow-up to a previous APL Photonics special issue on photonics and AI in 2020 [6], the current 2022 APL Photonics special issue discusses the status and future perspectives of photonics and AI in IT, among other topics, highlighting both the role of photonics for AI and the role of AI for photonics.

As an example of papers highlighting how upcoming AI techniques can influence the field of photonics, Chen and Dal Negro [7] propose in an invited paper a deep learning approach based on physics-informed neural networks that retrieves the optical parameters of nanophotonic structures from non-invasive near-field imaging. Similarly, in a contributed paper, Rendón-Barraza et al. [8] use deep-learning-enabled analysis of diffraction patterns as a critical part of a deeply sub-wavelength, non-contact optical metrology of sub-wavelength objects. In another contributed paper, Zhang et al. [9], on the other hand, show how deep learning can be utilized in the design of random metasurfaces, while highlighting the pitfalls: they notice that no single universal deep convolutional neural network model works well for all the metasurface classes studied in their paper.

In addition, several papers in this issue showcase how photonic hardware can enable accelerators or system architectures that speed up or improve the energy efficiency of core routines in AI. El Srouji et al. [10] have written a tutorial covering architectures, technologies, learning algorithms, and benchmarking for photonic and optoelectronic neuromorphic computing engines. As an example of a neuromorphic photonic system, in an invited paper, Hejda et al. [11] use off-the-shelf fiber-optic components operating at telecom wavelengths to experimentally demonstrate how a vertical-cavity surface-emitting laser (VCSEL)-based photonic spiking neuron can encode a digital image into continuous, rate-coded (at GHz speeds) spike trains. Lamon et al. [12] share their perspective on the critical problem of optical data storage, which is of utmost importance for many machine learning applications and hence also for the underlying hardware. Zhu et al. [13] share their perspective on the distribution of deep learning training workloads by proposing a system architecture that leverages silicon photonics to accelerate deep learning training. Nevin et al. [14] wrote a tutorial describing how machine learning techniques can be utilized in optical fiber communication systems, not only providing a literature survey but also highlighting promising avenues for upcoming techniques, such as explainable machine learning, digital twins, and physics-informed machine learning for the physical layer, and graph-based machine learning for the networking layer.

Furthermore, the special issue contains a selection of papers that specifically investigate the prospects of integrated photonic hardware for AI accelerators. Singh et al. [15] address in a contributed paper the need to co-simulate both the optical and electronic components of large-scale neuromorphic photonic integrated circuits on a single platform by proposing a Verilog-A-based approach, illustrated for a single photonic neuron circuit. In an invited article, Yi et al. [16] take inspiration from the reconfigurability of field-programmable gate arrays in electronics and propose an integrated coherent network of micro-ring resonators that can emulate optical filters, optical delay lines, an optical space switching fabric, high-extinction-ratio Mach-Zehnder interferometers, and photonic differentiation by controlling the phases in the arms of an interferometric mesh. Amin et al. [17] wrote an APL Photonics Editor's Pick demonstrating a novel electro-optic device, based on an ITO-graphene heterojunction integrated absorption modulator in a Si-photonics platform, that can be used to tune the shape of the neuron nonlinearity. In a contributed paper, Xiao et al. [18] use a hybrid III-V-on-silicon MOSCAP platform to illustrate how tensorized neural networks can be emulated; these require far fewer optical devices than other photonic neural architectures, leading to increased efficiencies in footprint and energy consumption. Shi et al. [19] explore in an invited paper the use of semiconductor optical amplifiers in an InP integrated photonics platform to emulate multi-layer neural networks, achieving a doubling in computation speed and a 12× improvement in energy efficiency compared to graphics processing units (GPUs). Finally, Al-Qadasi et al. [20] determine the bounds on energy efficiency and the scaling limits of today's silicon photonics technology and share their perspective on future research directions that would allow us to overcome these current limitations.
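To make concrete the core routine that the accelerators surveyed above target, the following minimal sketch (plain Python, purely illustrative and not drawn from any of the cited papers) shows that a deep-neural-network layer reduces to a matrix-vector multiplication followed by a nonlinear activation — respectively the operation photonic meshes compute in the analog optical domain, and the step that still calls for dedicated photonic nonlinear activation components to keep the full loop optical:

```python
def matvec(W, x):
    # Matrix-vector product: the computationally dominant step,
    # implemented optically by, e.g., interferometric meshes.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    # Nonlinear activation: the step that motivates photonic
    # nonlinear activation components in the optical domain.
    return [max(0.0, vi) for vi in v]

def layer(W, x):
    # One neural-network layer = linear algebra + nonlinearity.
    return relu(matvec(W, x))

# Example: a 2x3 weight matrix applied to a 3-element input vector.
W = [[1.0, -2.0, 0.5],
     [0.0, 1.0, 1.0]]
x = [1.0, 1.0, 2.0]
print(layer(W, x))  # [0.0, 3.0]
```

Stacking such layers yields a deep network; since the matvec cost grows quadratically with layer width while the activation cost grows only linearly, the multiply-accumulate step dominates, which is precisely why optical matrix multipliers are attractive.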

In conclusion, there is a fruitful cross-fertilization between the photonics and AI research communities, providing solutions to meet aggressive performance targets and design challenges in currently developed IT systems. We are hopeful that the selection of articles included in this APL Photonics special issue will both inspire the research communities as to which bottlenecks to address next and provide guidance on which IT applications can benefit from this cross-fertilization.

We are grateful for the support of APL Photonics Editor Yikai Su, Editor-in-Chief Benjamin Eggleton, and editorial managers Jessica Trudeau and Jenny Stein. Furthermore, we sincerely thank all the authors who shared their valuable insights in this special issue.

Conflict of Interest

The authors have no conflicts to disclose.

Author Contributions

Qixiang Cheng: Writing – original draft (equal). Madeleine Glick: Writing – original draft (equal). Thomas Van Vaerenbergh: Writing – original draft (equal).

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

REFERENCES

1. P. R. Prucnal, B. J. Shastri, I. Fischer, and D. Brunner, “Introduction to JSTQE issue on photonics for deep learning and neural computing,” IEEE J. Sel. Top. Quantum Electron. 26, 0200103 (2020). https://doi.org/10.1109/JSTQE.2020.2965384
2. B. J. Shastri, A. N. Tait, T. Ferreira de Lima, W. H. P. Pernice, H. Bhaskaran, C. D. Wright, and P. R. Prucnal, “Photonics for artificial intelligence and neuromorphic computing,” Nat. Photonics 15, 102–114 (2021). https://doi.org/10.1038/s41566-020-00754-y
3. J. W. Goodman, A. R. Dias, and L. M. Woody, “Fully parallel, high-speed incoherent optical method for performing discrete Fourier transforms,” Opt. Lett. 2, 1–3 (1978). https://doi.org/10.1364/ol.2.000001
4. M. Tur, J. W. Goodman, B. Moslehi, J. E. Bowers, and H. J. Shaw, “Fiber-optic signal processor with applications to matrix–vector multiplication and lattice filtering,” Opt. Lett. 7, 463–465 (1982). https://doi.org/10.1364/ol.7.000463
5. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017). https://doi.org/10.1038/nphoton.2017.93
6. K. Goda, B. Jalali, C. Lei, G. Situ, and P. Westbrook, “AI boosts photonics and vice versa,” APL Photonics 5, 070401 (2020). https://doi.org/10.1063/5.0017902
7. Y. Chen and L. Dal Negro, “Physics-informed neural networks for imaging and parameter retrieval of photonic nanostructures from near-field data,” APL Photonics 7, 010802 (2022). https://doi.org/10.1063/5.0072969
8. C. Rendón-Barraza, E. A. Chan, G. Yuan, G. Adamo, T. Pu, and N. I. Zheludev, “Deeply sub-wavelength non-contact optical metrology of sub-wavelength objects,” APL Photonics 6, 066107 (2021). https://doi.org/10.1063/5.0048139
9. T. Zhang, C. Y. Kee, Y. S. Ang, and L. K. Ang, “Deep learning-based design of broadband GHz complex and random metasurfaces,” APL Photonics 6, 106101 (2021). https://doi.org/10.1063/5.0061571
10. L. El Srouji, A. Krishnan, R. Ravichandran, Y. Lee, M. On, X. Xiao, and S. J. Ben Yoo, “Photonic and optoelectronic neuromorphic computing,” APL Photonics 7, 051101 (2022). https://doi.org/10.1063/5.0072090
11. M. Hejda, J. Robertson, J. Bueno, J. A. Alanis, and A. Hurtado, “Neuromorphic encoding of image pixel data into rate-coded optical spike trains with a photonic VCSEL-neuron,” APL Photonics 6, 060802 (2021). https://doi.org/10.1063/5.0048674
12. S. Lamon, Q. Zhang, and M. Gu, “Nanophotonics-enabled optical data storage in the age of machine learning,” APL Photonics 6, 110902 (2021). https://doi.org/10.1063/5.0065634
13. Z. Zhu, M. Y. Teh, Z. Wu, M. S. Glick, S. Yan, M. Hattink, and K. Bergman, “Distributed deep learning training using silicon photonic switched architectures,” APL Photonics 7, 030901 (2022). https://doi.org/10.1063/5.0070711
14. J. W. Nevin, S. Nallaperuma, N. A. Shevchenko, X. Li, M. S. Faruk, and S. J. Savory, “Machine learning for optical fiber communication systems: An introduction and overview,” APL Photonics 6, 121101 (2021). https://doi.org/10.1063/5.0070838
15. J. Singh, H. Morison, Z. Guo, B. A. Marquez, O. Esmaeeli, P. R. Prucnal, L. Chrostowski, S. Shekhar, and B. J. Shastri, “Neuromorphic photonic circuit modeling in Verilog-A,” APL Photonics 7, 046103 (2022). https://doi.org/10.1063/5.0079984
16. D. Yi, Y. Wang, and H. K. Tsang, “Multi-functional photonic processors using coherent network of micro-ring resonators,” APL Photonics 6, 100801 (2021). https://doi.org/10.1063/5.0062865
17. R. Amin, J. K. George, H. Wang, R. Maiti, Z. Ma, H. Dalir, J. B. Khurgin, and V. J. Sorger, “An ITO–graphene heterojunction integrated absorption modulator on Si-photonics for neuromorphic nonlinear activation,” APL Photonics 6, 120801 (2021). https://doi.org/10.1063/5.0062830
18. X. Xiao, M. B. On, T. Van Vaerenbergh, D. Liang, R. G. Beausoleil, and S. J. B. Yoo, “Large-scale and energy-efficient tensorized optical neural networks on III–V-on-silicon MOSCAP platform,” APL Photonics 6, 126107 (2021). https://doi.org/10.1063/5.0070913
19. B. Shi, N. Calabretta, and R. Stabile, “InP photonic integrated multi-layer neural networks: Architecture and performance analysis,” APL Photonics 7, 010801 (2022). https://doi.org/10.1063/5.0066350
20. M. A. Al-Qadasi, L. Chrostowski, B. J. Shastri, and S. Shekhar, “Scaling up silicon photonic-based accelerators: Challenges and opportunities,” APL Photonics 7, 020902 (2022). https://doi.org/10.1063/5.0070992

© 2022 Author(s). Published under an exclusive license by AIP Publishing.
