41 changes: 21 additions & 20 deletions docs/esa/euclid/euclid.rst

**Step 2:** The get_product_ method is used to download the fits file(s) listed in the "file_name" field included in the table returned in the previous step. The method returns the local path where the product(s) is saved.

.. note::
    * Given the size of the Euclid FITS images (~1.4 GB for the MER images and ~7 GB for calibrated VIS images), downloading individual files is time consuming, depending on the available internet bandwidth.
    * This step can be skipped when using ESA Datalabs_, as direct access to the products is possible.
.. Skip testing as the example requires a lot of time to download a huge file
.. doctest-skip::

    >>> file_name = res['file_name'][0]
    >>> print("Downloading file:", file_name)
    Downloading file: EUC_MER_BGSUB-MOSAIC-VIS_TILE102158889-F95D3B_20241025T024806.508980Z_00.00.fits
    >>> path = Euclid.get_product(file_name=file_name, output_file=file_name)


**Step 3:** Open the FITS file and inspect its content.

.. doctest-skip::

It is also possible to download just small portions of the MER (background-subtracted) images. The get_cutout_ method allows image cutouts to be downloaded and stored locally; for reference, downloading a 1'x1' cutout takes less than one second, and the resulting FITS file weighs ~5.5 MB. In the example below, the results of Step 1 above are combined with the "file_path" and "file_name" values obtained from the mosaic_product TAP_ table to create the main input of the get_cutout_ method.

.. note::
    This method...

    * makes use of the `Astroquery cutout service <https://astroquery.readthedocs.io/en/latest/image_cutouts/image_cutouts.html>`_ to download a cutout FITS image from the Archive, and it only works for MER images. For more advanced use cases please see the Cutouts.ipynb notebook available in the Euclid Datalabs_.
    * accepts both Astropy SkyCoord_ coordinates and valid Simbad/VizieR/NED names (as strings).

Download the cutout...

.. Skip testing as the example requires a lot of time to download a huge file
.. doctest-skip::

    >>> file_path = f"{res['file_path'][0]}/{res['file_name'][0]}"
    >>> cutout_out = Euclid.get_cutout(file_path=file_path, coordinate='NGC 6505', radius=0.1 * u.arcmin, output_file='ngc6505_cutout_mer.fits', instrument='None', id='None')
In the Archive, the 1D spectra data are served via the Datalink_ (a data access protocol compliant with the IVOA_ architecture) service. Programmatically, this product is accessible via the get_spectrum_ method (see the
`Access to spectra <https://s2e2.cosmos.esa.int/www/ek_iscience/Access_to_spectra.html>`_ section in the Archive help for more information about this product).

.. note::
    As when accessing other Euclid products:

    * a two-step approach, as detailed in Sect. 1.6 and 1.7 above, is needed.
    * downloading of products is not needed when using ESA Datalabs_.


**Step 1:** First, a list of sources that have associated spectra must be compiled. This information is available in the spectra_source table, which also includes the FITS file name and other metadata that is relevant when reading the spectra from Datalabs_:
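By way of illustration, such a query might look like the sketch below. This is a hedged example: the table name spectra_source comes from the text above, but any schema qualifier and the selected columns are assumptions and should be checked against the Archive's table metadata.

```python
# Hypothetical ADQL sketch: 'spectra_source' comes from the text above,
# but any schema prefix and the column selection are assumptions.
query = """
SELECT TOP 10 *
FROM spectra_source
"""

# The query would then be submitted through the usual TAP interface, e.g.:
# job = Euclid.launch_job(query)
# res = job.get_results()
print(query.strip())
```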

There are several ways to log in to the Euclid archive, as detailed below:

.. Skip testing as the example requires authentication
.. doctest-skip::

    >>> from astroquery.esa.euclid import Euclid
    >>> Euclid.login_gui() # Login via graphic interface (pop-up window)
All the asynchronous jobs launched by registered users are stored in the user area, which can store up to 10 GB of jobs. Therefore, it is recommended to remove unnecessary jobs to avoid filling up the user quota.
The example below shows how to delete all the jobs in the user area using the list_async_jobs_ and remove_jobs_ methods.

.. Skip testing as the example requires authentication
.. doctest-skip::

    >>> Euclid.login()
    >>> job_ids = [job.jobid for job in Euclid.list_async_jobs()]
It is also possible to take advantage of the job metadata to delete all the jobs in a given time range.

First, use the load_async_job_ method to download the metadata of the async jobs stored in the user space:

.. Skip testing as the example requires authentication
.. doctest-skip::

    >>> job_obj = [Euclid.load_async_job(jobid=jobid) for jobid in job_ids]
    >>> job_ids = [job.jobid for job in job_obj]
Second, create a dataframe that contains the jobid and date information:
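The idea behind that dataframe can be sketched offline as follows. This is a minimal example with synthetic job ids and start times (real code would read the jobid and start time from each job object returned by load_async_job_); the column names job_id, date and hour_UTC mirror those used in the filtering step.

```python
import datetime
import pandas as pd

# Synthetic job metadata standing in for the async-job objects;
# real code would take jobid and the start timestamp from each job.
jobs = [
    ('job1', datetime.datetime(2024, 10, 1, 7, 15)),
    ('job2', datetime.datetime(2024, 10, 1, 9, 30)),
    ('job3', datetime.datetime(2024, 9, 30, 7, 5)),
]

df = pd.DataFrame(
    {
        'job_id': [j[0] for j in jobs],
        'date': [j[1].date() for j in jobs],
        'hour_UTC': [j[1].hour for j in jobs],
    }
)

# Jobs started on 2024-10-01 during the 07h UTC hour
subset = df[(df['date'] == datetime.date(2024, 10, 1)) & (df['hour_UTC'].isin([7]))]
print(subset['job_id'].to_list())
```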

Finally, extract the job IDs included in a given time range (in the example below, all the jobs stored on 2024-10-01 during the 7 h UTC hour) and delete them:

.. Skip testing as the example requires authentication
.. doctest-skip::

    >>> subset = df[(df['date'] == datetime.date(2024,10,1)) & (df['hour_UTC'].isin([7]))]
    >>> jobs_to_delete = subset['job_id'].to_list()