The General Theory of Relativity, formulated by Albert Einstein in 1915, revolutionized our understanding of the physical universe by describing gravity as the warping of space-time itself. The theory passed its initial tests, but many of its predictions, including that of gravitational waves, were far ahead of their time. Only in the 1980s did experimental physicists seriously consider building detectors that could confirm the existence of gravitational waves. Fortunately, by then, computer scientists had developed just the tools they would need to succeed.

As General Relativity neared the end of its adolescence, the first mathematical models of neural networks were designed, in 1943. The ideas developed rapidly, and by 1985 computers could learn the statistical distribution of data. By 2002, when LIGO began its first science operations, data science was recognized as a field in its own right, and it formed the backbone of the emerging discipline of gravitational-wave astronomy.

Today, the LIGO-Virgo-KAGRA detectors generate thousands of gigabytes of data each day (LIGO's archive already holds the equivalent of over a million DVDs!). Analyzing this data to extract meaningful physical information would have been an impossible task a few decades ago. Today, these techniques have enabled physicists to interpret gravitational-wave signals from colliding black holes and to measure displacements of the order of 1e-19 meters! All this is made possible by our ever-increasing computational prowess and by innovation in the construction of the equipment. We operate the detector in a vacuum and engineer other solutions to reduce noise (disturbances) in the signal and improve accuracy. But there is more to noise than just this.

Noise: A Physical and a Computational Viewpoint

The gravitational-wave signal has a very low amplitude, and disturbances (termed noise) dominate the recorded data. Noise is inherent to measurement: from the thermal agitation of the mirrors to seismic disturbances, everything can affect the result. We need to keep tabs on the detectors' state to estimate the amount and type of noise, and we use this information to deal with it. If the noise cannot be addressed, we have to discard the data. We therefore collect additional data in 'auxiliary channels' to keep track of such factors (detector temperature, pressure, disturbances in the electrical supply, etc.).

As Prof. Conti puts it, "Apart from the main channel that samples the photodiode at the detector output and eventually measures the passage of a Gravitational Wave, you have about 200,000 auxiliary channels that measure different quantities related to the external and internal environment of the detector. We also measure disturbances from human activities like control systems, electric mains, and even the air conditioners".

But how do we know whether the noise in one of the auxiliary channels has affected the primary data channel? "'Coherence' between the main channel and any of the auxiliary channels is indicative of their coupling. Such occurrences need to be avoided. However, if some residual coupling exists, it should be measured so that we can subtract the noise or at least take it into account," explains Prof. Conti. If the main channel shows 'coherence' with one of the auxiliary channels, that can indicate noise leaking into the main data channel (just as applying an alternating voltage of a given frequency to a circuit produces a current of the same frequency; there are mathematical notions that quantify this 'correlation' between two channels).
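To make the idea of coherence concrete, here is a minimal sketch using numpy and scipy; the sampling rate, channel contents and the 50 Hz disturbance are invented for the example and are not drawn from real detector data.

```python
import numpy as np
from scipy.signal import coherence

fs = 4096                              # Hz, assumed sampling rate for this toy example
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(0)

# Toy data: a 50 Hz disturbance (think mains hum) present in an auxiliary
# channel and leaking, weakly, into the main strain channel.
aux_channel = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
main_channel = 0.05 * np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)

# Magnitude-squared coherence: close to 1 at frequencies where the two
# channels are strongly coupled, close to 0 where they are independent.
f, Cxy = coherence(main_channel, aux_channel, fs=fs, nperseg=fs)
print(f"coherence at 50 Hz: {Cxy[np.argmin(np.abs(f - 50))]:.2f}")
```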

The collected data is then assigned one of four quality flags (a minimal selection sketch follows the list):

DATA: Good to use for detection.

CAT1: A critical malfunction of the instrument; the data is plagued by unusually high noise levels. About 1.7% of Hanford data and 1.0% of Livingston data was flagged CAT1 in O1. In O2 (the next observing run), this fell to 0.001% for Hanford, 0.003% for Livingston and 0.05% for Virgo, as the detectors' performance improved.

CAT2: Some activity is observed in an auxiliary channel together with a correlation between that channel and the main data channel (the strain), and the correlation is well understood. If the data is used without the corresponding cleaning, glitches can appear.

CAT3: Correlation is observed but not well characterized. The data is generally not used for analysis, but we keep it as an option.
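As a purely illustrative sketch of how such flags are used (the GPS times and segment list below are made up), selecting only the analyzable stretches might look like this:

```python
# Toy segment list: (GPS start, GPS end, data-quality category for the segment).
segments = [
    (1187000000, 1187003600, "DATA"),   # good to use for detection
    (1187003600, 1187004000, "CAT1"),   # instrument malfunction: discard
    (1187004000, 1187007600, "CAT2"),   # understood coupling: usable after cleaning
    (1187007600, 1187011200, "CAT3"),   # poorly characterized coupling: set aside
]

# Keep only the clean stretches for a straightforward search.
analyzable = [(start, end) for start, end, flag in segments if flag == "DATA"]
print(analyzable)
```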

"To test the functioning of the detection pipeline, we do 'injections' of a fabricated signal, from time to time." says Dr. Stefano. While analyzing the data, we injections have to account for injections, along with the noise. The noise we see here has properties we can use to our advantage in the analysis.

Matched Filtering: Using Noise

"You need to have some knowledge of physics that governs the process so that you can simulate it and compute signal templates (expected waveforms), banks of signal templates, calculated using Numerical Relativity. Then what you do is you try to superimpose these on the data coming from the experiment until you find a match," explains Dr. Bagnasco. This method is called Matched Filtering. The process includes classifying the data into two classes (having or not having a signal). Following that, we characterize the signal deemed to be present.

We calculate 'templates' for gravitational-wave events across a range of parameters. A template tells us what the signal should 'theoretically' look like (e.g., for the collision of a black hole of mass 'm' with another of mass 'M', we calculate the signal template for those masses and other parameters using general relativity).
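As an illustration, a template for a given pair of masses can be generated with the open-source pycbc library (one common choice; the approximant and parameters below are illustrative assumptions, not those of the production pipelines):

```python
from pycbc.waveform import get_td_waveform

# Time-domain template ("plus" and "cross" polarizations) for a binary
# black-hole merger with component masses of 36 and 29 solar masses.
hp, hc = get_td_waveform(
    approximant="SEOBNRv4",
    mass1=36, mass2=29,        # solar masses
    delta_t=1.0 / 4096,        # sampling interval in seconds
    f_lower=20,                # start the waveform from 20 Hz
)
print(len(hp), "samples spanning", hp.duration, "s")
```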

Then we subtract the template from the detected signal (the detected signal is gravitational wave + noise, so if the template represents the gravitational wave, the remaining part is just noise). This residual noise is 'whitened' and then analyzed in the frequency domain. We whiten the signal by dividing its frequency-domain representation by the amplitude spectral density of the noise, which is equivalent to 'normalizing' a distribution (while preserving its shape).

The frequency distribution of the noise tells us how much noise there is at each frequency (the noise, a signal in the time domain, also called a time series, is taken to the frequency domain by a Fourier transform after applying a window over an interval). "Now the noise in the frequency domain is assumed to be 'static' and 'normally distributed'," explains Dr. Stefano. 'Static' (stationary) here means that the noise distribution across frequency doesn't change with time, which is a reasonable assumption. Secondly, we assume that the noise follows a normal distribution (empirically, also a reasonable assumption).
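A minimal numpy/scipy sketch of the whitening step just described, using toy Gaussian data and a Welch estimate of the noise power spectral density (real pipelines treat windowing and edge effects far more carefully):

```python
import numpy as np
from scipy.signal import welch

fs = 4096                                   # Hz, assumed sampling rate
rng = np.random.default_rng(0)
data = rng.standard_normal(64 * fs)         # stand-in for a strain time series

# Estimate the noise power spectral density (PSD) with Welch's method,
# i.e. averaged windowed Fourier transforms of the time series.
f_psd, psd = welch(data, fs=fs, nperseg=4 * fs)

# Whiten: divide the Fourier transform of the data by the amplitude
# spectral density (the square root of the PSD), then transform back.
freqs = np.fft.rfftfreq(data.size, d=1 / fs)
asd = np.sqrt(np.interp(freqs, f_psd, psd))
white = np.fft.irfft(np.fft.rfft(data) / asd, n=data.size)
```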

After subtracting the template, we check how 'Gaussian' the remaining noise is. If it is Gaussian, we have isolated a gravitational-wave event! The result is then verified by comparing it across multiple observatories: as we'll see in the coming section, if the observations match across the various observatories, the existence of a signal is confirmed. A minimal sketch of such a Gaussianity check follows; after that, we will look at how all of this comes together computationally at the Virgo facility.
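One simple way to quantify how 'Gaussian' a residual is, is a statistical normality test; a toy sketch (using scipy's D'Agostino-Pearson test, one possible choice among several):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residual = rng.standard_normal(100_000)     # stand-in for the whitened residual

# Null hypothesis: the residual is drawn from a normal distribution.
# A large p-value means there is no evidence of leftover, non-Gaussian
# structure, i.e. the template has accounted for the signal.
statistic, p_value = stats.normaltest(residual)
print(f"normality test p-value: {p_value:.3f}")
```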

Wave Analysis at Virgo: The Computational Infrastructure

"We can segregate gravitational-wave analysis at Virgo, into 'online', 'low-latency' and 'offline'." says Stefano. Online computation covers detector control, monitoring, and data acquisition across multiple channels. "Low latency search generates 'triggers' for multi-messenger astronomy. For example, if we observe gravitational waves, we can appropriately set up the telescopes and EM-wave detectors to detect other 'messengers'.". Offline search is more extensive and is done on stored data, and it is used for detector commissioning and scientific analysis.

"In the format generally used for signal searches and parameter estimation, gravitational wave data is reduced to a 'time series' of the dimensionless 'strain' variable. It is sampled at ~16 kHz, with an associated 'state vector' of detector status information and data quality flags.", stated Dr. Bagnasco (Time series of the variable h(t) is the values it takes, listed out w.r.t. time). This data totals to a very nominal 5TB in a year. But other than this, there are 2PBs of raw data exported from Virgo to external computation and storage facilities!

"The data needs to be transferred between the observatories (Virgo and LIGO) and across the distributed computational infrastructure, at the lowest possible latency, to timely deliver 'triggers' for multi-messenger astronomy.". The low-latency processing takes place concurrently in Virgo and LIGO computational centers, "To search for transient signals, Compact Binary Coalescences (CBC) and unmodeled burst signals.", explains Dr. Stefano. If any 'gravitational wave event candidate' is found, its information is added to a database and forwarded to NASA in an almost automated fashion. A dynamic combination of pre-existing physics software ensures the above prompt transfer and maintains the integrity of the data throughout the computation.

The pipelines in the observatories and computation facilities are very 'heterogeneous' in nature (and programmed in various languages). "A lot of code has dependency issues with the environment, and much of the software in use is no longer maintained," states Stefano. Transferring and analyzing the data across observatories is therefore a challenge. "As a consequence of this (the dependencies and other software issues), there is no clear boundary between middleware and the analysis program. Finally, full interoperability with the US-based infrastructure is mandatory." This synchronization helps us confirm the observations. Many signals slip under the radar because templates for them are not available (e.g., gravitational-wave signals from burst events). Cross-correlating the outputs of two observatories offers an alternative: "This can also be used for gravitational wave detection, but matched filtering is more widely used, and the signal correlation between two observatories is used in parallel to matched filtering," explains Dr. Stefano.

The scientists envisioned an informal software standard to tackle this challenge; thus the 'International Gravitational-Wave Network' (IGWN) came into being. Some of the functionalities the standard must support are fast and safe bulk data transfer, software packaging, data bookkeeping, managing 'jobs' (when to schedule which process), monitoring and accounting of the work, and providing consistent AAI (Authentication, Authorisation and Identity management).

"The required functionalities resemble those of software in the high energy physics domain, and so we can utilize a lot of high energy physics software in gravitational wave analysis.", Dr. Stefano points out. One of the guidelines for implementing the IGWN standard explicitly states, "Adopt the smallest possible set of mainstream, widely used tools, leveraging upon High Energy Physics experience.".

"In the IGWN architecture, software development goes through a common GitLab instance with Continuous Integration features. And the dependencies are packaged as Conda environments, and distributed through a CVMFS repository.". CVMFS (CERN VM File System) was developed by CERN and is very widely used in the High Energy Physics community. An intuitive explanation of the working can be given by understanding the case of distributed computing. When high computational power is required, we distribute the work among multiple 'worker nodes' (each node is a system doing allotted work). But the functionality has to be maintained as if the entire thing was one extensive computer/file system (we can treat the various parts of processing the same data independently, right?). CVMFS provides a file system that makes it possible to do the same, and it distributes software and frame-file data (a 'read-only' file system, running on multiple nodes).

"Rucio is the go-to tool for bulk data transfer, except for low latency needs Rucio will be utilized for data transfer needs between computing centers. For low latency needs, we utilize Apache Kafka.". Rucio managed the transfer of around 450 Petabytes of data during the ATLAS Experiment! Apache Kafka was originally developed by LinkedIn and is now made OpenSource and is widely used in industry and academia alike. Other software like StashCache and HTCondor (for workload management) are utilized to meet the required functionalities, but the IGWN architecture is still in progress.

Machine Learning for Gravitational-Wave Analysis

The current method of template matching relies on searching the data stream for matches against a large bank of precomputed templates. "This requires high computation power for calculating the collection of feasible signals, as well as doing the matching with each template separately. That might still be doable in the current scenario, but it will become infeasible for projects like the Einstein Telescope," states Dr. Stefano. Instead, we can use the template banks to train machine-learning models and then use the models for detection instead of running a brute-force template match.

In a Nature paper from July 2021, a whole month of LIGO data was analyzed using machine-learning methods within 7 minutes! The computation was run on the HAL computing cluster and represents a significant leap from prior processing times. But how does machine learning work in this situation? We use neural networks.

Machine-learning algorithms improve their 'performance' (e.g., accuracy) with experience: a trainable model gets better as it is trained on more data. A neural network is a machine-learning model loosely modeled on the structure of the human brain. Some neural networks are particularly good at finding patterns, one family being Convolutional Neural Networks (CNNs). We briefly illustrate how a basic CNN works.

Convolutional Neural Networks (CNNs) combine multiple neighboring input points and are therefore widely used for feature extraction. Take a picture (a 2D array of 'pixel' inputs): a convolution over a 3x3 grid outputs a linear combination of the nine inputs under it, and this window is moved around the entire nxn grid, with the outputs arranged in a grid of their own. Each output therefore carries information from the pixels spatially around it. Stacking such layers lets the network find 'features' in the data (in practice, the layers of such networks have successfully learned features ranging from simple ones like horizontal and vertical lines to complex ones like 'eyes').
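A bare-bones sketch of the sliding 3x3 linear combination described above (no padding, stride 1, and a fixed kernel instead of learned weights):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a 3x3 kernel over a 2D image, taking a linear combination
    of the 9 pixels under it at each position (no padding, stride 1)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

image = np.random.rand(8, 8)
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])   # a classic edge-detecting kernel
print(conv2d(image, vertical_edge).shape)   # (6, 6)
```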

One possible approach is to train 1-D convolutional neural networks on the banks of templates so that they learn the relevant features. A first neural network performs the 'classification' task, sorting inputs into 'with' or 'without' a gravitational-wave signal. After that, a regressor neural network runs to predict the parameters of the signal.
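A toy version of such a 1-D CNN classifier, sketched in PyTorch; the architecture, input length and layer sizes are assumptions for illustration, not the published networks. A second, similar network with a regression head would then estimate the signal parameters.

```python
import torch
import torch.nn as nn

class GWClassifier(nn.Module):
    """Toy 1-D CNN: input is a whitened strain segment, output is the
    probability that it contains a gravitational-wave signal."""
    def __init__(self, n_samples=4096):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
        )
        # Work out the flattened feature size for the given input length.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_samples)).numel()
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(n_flat, 1), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))

model = GWClassifier()
segment = torch.randn(1, 1, 4096)   # one second of toy strain at 4096 Hz
print(model(segment))                # probability that a signal is present
```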

Such models have successfully detected signals for which they were given no templates during training (such as black-hole collisions on elliptical orbits)! This ability to interpolate is helpful because signals that fall 'between' any two templates are detected more reliably; moreover, such models can pick up gravitational-wave events even when no template of that kind was provided at all.

The Gravitational Wave Open Science Center (GWOSC) provides data from gravitational-wave observatories along with tutorials and software tools for enthusiasts. There are many materials and repositories available, and any enthusiast can start GW analysis from scratch! As more and more people join the cause, this endeavor of understanding the universe moves ahead in leaps and bounds. As Dr. Bagnasco mentions, "We're working on improving the data analytics and computation aspects, and plan to organize hackathons for the same, to get to better solutions." With the remarkable success of deep-learning methods on the task at hand, we look forward to more breakthroughs in the time to come. There are many new horizons and possibilities with upcoming projects like the Einstein Telescope. For now, we witness innovation propel humanity to even greater heights!