Projected Pupil Plane Pattern (PPPP) with artificial Neural Networks. (arXiv:1905.09535v1 [astro-ph.IM])
<a href="http://arxiv.org/find/astro-ph/1/au:+Yang_H/0/1/0/all/0/1">Huizhe Yang</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Gutierrez_C/0/1/0/all/0/1">Carlos Gonzalez Gutierrez</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Bharmal_N/0/1/0/all/0/1">Nazim A. Bharmal</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Juez_F/0/1/0/all/0/1">F. J. de Cos Juez</a>

Focus anisoplanatism is a significant measurement error when using a single
laser guide star (LGS) in an Adaptive Optics (AO) system, especially for the
next generation of extremely large telescopes. An alternative LGS
configuration, called Projected Pupil Plane Pattern (PPPP), solves this problem
by launching a collimated laser beam across the full pupil of the telescope.
With a linear, modal reconstructor, the high laser power requirement
($\sim1000\,\mbox{W}$) renders PPPP uncompetitive with Laser Tomography AO.
This work discusses easing the laser power requirement by using an artificial
Neural Network (NN) as a non-linear reconstructor. We find that the non-linear
NN significantly reduces the required measurement signal-to-noise ratio (SNR),
lowering the PPPP laser power requirement to $\sim200\,\mbox{W}$ for a useful
residual wavefront error (WFE). At this power level, the WFE becomes 160 nm
root mean square (RMS) and 125 nm RMS when $r_0=0.098$ m and $0.171$ m
respectively, for turbulence profiles which are representative of conditions at
the ESO Paranal observatory. In addition, it is shown that, as a non-linear
reconstructor, an NN can perform useful wavefront sensing using a beam profile
from one height as the input, instead of the minimum of two profiles required
by the linear reconstructor.
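The core idea of the abstract, replacing a linear modal reconstructor with a non-linear NN mapping from a measured beam profile to modal wavefront coefficients, can be sketched minimally as follows. All sizes, layer widths, and the random data below are hypothetical placeholders for illustration, not the architecture or training set used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIX = 16 * 16   # flattened beam-profile image (hypothetical resolution)
N_HIDDEN = 64     # hidden-layer width (hypothetical)
N_MODES = 20      # number of reconstructed modal coefficients (hypothetical)

# Random weight initialisation for a small fully-connected network.
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_PIX))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_MODES, N_HIDDEN))
b2 = np.zeros(N_MODES)

def reconstruct(profile):
    """Forward pass: single beam profile -> modal wavefront coefficients.

    The tanh non-linearity is what distinguishes this map from a linear,
    modal reconstructor (a single matrix multiply).
    """
    h = np.tanh(W1 @ profile + b1)
    return W2 @ h + b2

# Example: one simulated noisy beam profile (unit mean intensity + noise).
profile = rng.normal(1.0, 0.05, N_PIX)
coeffs = reconstruct(profile)
```

Note that the sketch takes a profile from one height as its only input, mirroring the abstract's point that a non-linear reconstructor can work with a single beam profile, whereas the linear reconstructor needs at least two.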

