<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[IIT Tech Ambit]]></title><description><![CDATA[IIT Tech Ambit]]></description><link>https://iit-techambit.in/</link><image><url>https://iit-techambit.in/favicon.png</url><title>IIT Tech Ambit</title><link>https://iit-techambit.in/</link></image><generator>Ghost 3.36</generator><lastBuildDate>Wed, 08 Apr 2026 10:17:41 GMT</lastBuildDate><atom:link href="https://iit-techambit.in/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Sharkskin Revolution: Redesigning the future]]></title><description><![CDATA[We take a deep-dive into the innovative world of Sharkskin, its uses and ingenious design. It is a cutting-edge technology that has the potential to be a revolution in the world of sports, biomedicine, automobiles and transport.]]></description><link>https://iit-techambit.in/sharkskin-revolution-redesigning-the-future/</link><guid isPermaLink="false">69020411a1df0805311e2f1e</guid><dc:creator><![CDATA[Anirban Pal]]></dc:creator><pubDate>Fri, 14 Nov 2025 15:45:49 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/11/Screenshot-2025-11-14-at-9.15.24-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/11/Screenshot-2025-11-14-at-9.15.24-PM.png" alt="Sharkskin Revolution: Redesigning the future"><p>Imagine a swimsuit that makes you glide through water like a shark — faster, smoother, and with less effort. Sounds like something out of a sci-fi movie, right? Well, after a fun collaboration with <em>Prof. Shivendu Ranjan, Nano Bio Research Lab, Dept. Of Nano Science and Technology, IIT Kharagpur</em>, we believe this is a real possibility. 
Here, we discuss one of the coolest examples of biomimicry — the art of taking inspiration from nature to solve human problems.</p><p><strong>What’s So Special About Sharkskin?</strong></p><p>According to an article published by the <em>Royal Society</em>, sharks are built for speed. Their skin isn’t smooth at all — it’s covered with tiny, tooth-like scales called dermal denticles. Each denticle has a pulp cavity covered with <em>enamel</em> or <em>vitrodentine</em>, and together the denticles form tiny grooves along the shark’s body that reduce drag and turbulence in the water. These micro/nano hierarchical structures act like <em>microgroove riblets</em>, for which drag reduction rates of <em>2–7%</em> have been observed.</p><p><strong>How It Actually Works</strong></p><p>Sharkskin tackles one of the most fundamental problems in fluid dynamics. Friction drag is the most basic type of fluid drag: the water right next to the body moves very slowly (it is almost stuck), while the water farther away moves faster, and this difference in speed creates the drag. The microscopic ridges of <strong>Sharkskin</strong> do something special: they control the boundary layer, keeping the faster-moving water from mixing too much with the slower layer close to the skin, and hence reduce drag.</p><p>Here's an interesting fact: at the 2008 Beijing Olympics, swimmers wearing <strong>Sharkskin</strong>-inspired suits broke more than 90 world records! The suits were so effective that the <em>International Swimming Federation (FINA)</em> eventually had to ban full-body versions to keep the competition fair. Talk about performance bordering on cheating!</p><p>Many television series have explored nanomaterials biomimicry, such as the 'Iron Man' suit or the memory chips of 'Black Mirror', but none has yet imagined a cutting-edge <strong>Sharkskin</strong>-based outfit. Viewers will be inspired by the awesomeness of this technology only if they notice something similar worn by the heroes they idolise.</p><p><strong>Technologies to Make It Industry-Specific</strong></p><p>When scientists decided to copy <strong>Sharkskin</strong>, the challenge was clear: “How do we make millions of microscopic grooves, just like a shark’s skin, on man-made materials?” Some methods discussed by <em>Prof. Ranjan</em> include <strong>laser-based micro-etching</strong> and <strong>lithography</strong>, much like making computer chips, in which lasers engrave minute ridges and grooves into the surface. Another is <strong>nanoparticle-based paints</strong> or <strong>polymers</strong> applied to the surface: as they dry, they self-organize into riblet patterns like <strong>Sharkskin</strong>. Industries have full liberty to use any other method or innovation to design such swimsuits, with more emphasis on R&amp;D.</p><p><strong>What Challenges Are Associated?</strong></p><p>Prof. Ranjan also highlighted some of the major challenges in replicating it. The biggest is reproducing <strong>Sharkskin</strong>’s microscopic patterns on a large, industrial surface: the features sit at the nano/micro scale, and both the accuracy and the compatibility with the wearer need to be spot on. Surface abrasion, loss of sensitivity, and <em>biofouling</em> (marine organism buildup) all decrease effectiveness over time. The most agonising aspect is regulation and standardization, as such products must undergo rigorous safety and performance certifications.</p><p>An interesting observation is that the grooves in the material guide air or rainwater in ways that keep dust and dirt from settling. Building materials with this self-cleaning property can therefore also be applied in architecture and medical technology. <em>Sharklet</em>, a surface pattern used on medical devices, reduces bacterial growth without using antibiotics or chemicals.
Now, that is real innovation!</p><p><strong>Sharkskin</strong> didn’t just help sharks swim faster; it helped humans reimagine the way we move, build, and protect our civilization. The idea has been applied to many technologies and real-world applications, motivating the design of more efficient ships, aeroplanes, motorbikes, and other modes of transport.</p><p><strong>Conclusion</strong></p><p>The limitless possibilities arising out of a simple idea have proved our resourcefulness from time to time. Innovations such as these have allowed us to create solutions that are smarter, greener, and more efficient. A new generation of enthusiastic researchers now has the responsibility to go deeper into the intricacies of such a revolutionary technology, one that could very easily redefine how we swim, fly, and move across the world. In fact, it probably is nature's way of telling us “<em><strong>The answers have always been here. You just had to look closer.</strong></em>”</p>]]></content:encoded></item><item><title><![CDATA[The Ice Age and the Human Age]]></title><description><![CDATA[The North and South poles of our Earth are where most of the ice on the planet is found. But why? There can be many potential answers. This article covers one of them by taking you back to the age of ice: the Ice Age.]]></description><link>https://iit-techambit.in/ice-age/</link><guid isPermaLink="false">68467f6aa1df0805311e2da8</guid><dc:creator><![CDATA[Manas Kumar Gautam]]></dc:creator><pubDate>Fri, 24 Oct 2025 09:08:07 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/10/cover.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/10/cover.png" alt="The Ice Age and the Human Age"><p>Have you ever wondered why there are “caps” at the North and the South poles of Earth? Why is there so much ice there? The reasons for such things lie in the history of Earth.
Over the years, geologists have divided Earth's past into various divisions, collectively called the Geological Time Scale. <br>One way to divide the history of Earth is by using geological features, such as sedimentary rock layers. Each rock layer corresponds to a change in the Earth's environment and gives an idea of what happened in the past. Some of these layers serve as boundaries that mark the end or beginning of an era.<br>For example, the K–Pg boundary (the boundary that marks the end of the dinosaurs) marks both the end of the Cretaceous Period, the last period of the Mesozoic Era, and the beginning of the Paleogene Period, the first period of the Cenozoic Era. By exploring the landforms, sediments, and fossils of the most recent period in the geologic time scale, the Quaternary Period (spanning from about 2.58 million years ago to the present), we can identify glacial periods of severely cold climate, when great ice sheets formed in the high middle latitudes of the northern hemisphere and glaciers and ice caps advanced in mountain regions around the world. This is one of the clues to answering our question, and it leads us to an important age. So, be ready with a blanket and a heater to experience the most recent chapter in Earth’s history: the Ice Age.</p><h2 id="how-was-our-earth-during-the-ice-age"><strong>HOW WAS OUR EARTH DURING THE ICE AGE</strong></h2><p>Over its past, the Earth has undergone various ice ages or glacial periods, including great ice ages and little ice ages, some of which involved partial freezing of the Earth (one more clue to the answer to our question!).</p><p>The most recent major ice age, known as the Great Ice Age or Pleistocene Epoch, spanned approximately 2.6 million to 11,700 years ago.
Extensive ice sheets and glaciers formed and retreated in a series of glacial and interglacial cycles during this epoch (an interglacial being a period of milder climate between ice ages), shaping much of the Earth’s surface as we know it today.</p><p>The Great Ice Age, a recent chapter in the Earth's history, was a period of recurring widespread glaciations. During the Pleistocene Epoch of the geologic time scale, which began about 2.6 million years ago, mountain glaciers formed on all continents, the icecaps of Antarctica and Greenland were more extensive and thicker than today, and vast glaciers, in places as much as several thousand feet thick, spread across northern North America and Eurasia. So extensive were these glaciers that almost a third of the present land surface of the Earth was intermittently covered by ice. Much has been learned about the Ice Age glaciers because evidence of their presence is so widespread and because similar conditions can be studied today in Greenland, in Antarctica, and in many mountain ranges where glaciers still exist. It is possible, therefore, to reconstruct in large part the extent and general nature of the glaciers of the past and to interpret their impact on the physical and biological environments.</p><h2 id="climate-change-and-ice-age"><strong>CLIMATE CHANGE AND ICE AGE</strong></h2><p>It is also important to recognise that the ice age isn’t just about advancing and retreating ice sheets. Major environmental changes also took place in the Mediterranean region and in the tropics. The Sahara, for example, became drier, cooler, and dustier during glacial periods, yet early in the present interglacial it was a mosaic of lakes and oases with tracts of lush vegetation.
A defining feature of the Quaternary Period is the repeated fluctuation in climate as conditions shifted from glacial to interglacial, and back again, over the course of the last 2.5 million years or so.</p><p>Willi Dansgaard (1922–2011) was the first scientist to demonstrate that the ice sheets themselves provide an extended record of Earth’s climate history. Dansgaard was interested in oxygen isotope ratios in rainfall, snow, and ice. He made the landmark discovery that the oxygen isotope profile in ice cores provides a long-term record of changing air temperature in the polar regions. He was able to show that as air temperature falls, more molecules of H2O containing the heavy oxygen-18 isotope condense and are lost from clouds as rain and snowfall. Thus, atmospheric water vapor becomes more and more depleted of oxygen-18 in a poleward direction. In 1966, an American team obtained a 1,390 m ice core from Camp Century—the first ice core to penetrate the Greenland ice sheet down to bedrock. Recent work on Greenland ice cores has allowed the end of the Pleistocene epoch and the onset of the Holocene interglacial to be dated very precisely to 11,700 years before AD 2000.</p><p>As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide and methane. A Swiss physicist, Hans Oeschger (1927–98), made fundamental contributions to our understanding of ice age climate change. He pioneered the measurement of greenhouse gases in the bubbles trapped in ancient ice. In his laboratory at the University of Bern, Oeschger analysed many thousands of samples from Greenland and Antarctica. In 1979, his team was the first to show that CO2 concentrations during glacial stages were almost half those of the present. The ice-core records also show that the changes in temperature closely track the changes in methane and CO2.
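</p><p>Dansgaard’s oxygen-isotope measure, δ<sup>18</sup>O, compares a sample’s ratio of heavy to light oxygen against a reference standard, expressed in parts per thousand. Here is a minimal sketch of that standard formula; the VSMOW reference ratio below is the commonly quoted value, and the helper function is our own illustration, not ice-core software:</p>

```python
# 18O/16O ratio of Vienna Standard Mean Ocean Water (VSMOW),
# the usual reference for delta-18O values.
R_VSMOW = 0.0020052

def delta_18O(r_sample, r_standard=R_VSMOW):
    """delta-18O in per-mil (parts per thousand). Polar snow gives
    strongly negative values because water molecules carrying the
    heavy oxygen-18 isotope rain out preferentially en route poleward."""
    return (r_sample / r_standard - 1.0) * 1000.0

# e.g. a sample whose ratio is 2% below the standard reads as about -20 per-mil
depleted_sample = delta_18O(R_VSMOW * 0.98)
```

<p>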
Methane is a potent greenhouse gas—it is stored in large volumes in the frozen biomass of the permafrost and as methane hydrate within sediments beneath the ocean floor. Ice core data have been fundamental in demonstrating that changes in the composition of the atmosphere played a key role in the shifting climates of the Quaternary, but there is still much debate about the processes involved and the leads and lags. The glacial and interglacial shifts during the Quaternary Period can be explained by the Milankovitch cycles, coupled with the exchange of CO2 between the oceans and the atmosphere. So, what are the Milankovitch cycles, really?</p><h2 id="the-milankovitch-cycles-and-ice-age-">THE MILANKOVITCH CYCLES AND ICE AGE:</h2><p>Any discussion of the Ice Age should involve the Milankovitch cycles, and this concept can answer the question we posed at the start of this article. Milutin Milankovitch hypothesised that the long-term, collective effects of changes in Earth’s position relative to the Sun are a strong driver of Earth’s long-term climate and are responsible for triggering the beginning and end of Ice Ages. Specifically, he examined how variations in three types of Earth orbital movements affect how much solar radiation (known as insolation) reaches the top of Earth’s atmosphere, as well as where it reaches. These cyclical orbital movements, which became known as the Milankovitch cycles, cause variations of up to 25 per cent in the amount of incoming insolation at Earth’s mid-latitudes (the areas of our planet located between about 30 and 60 degrees north and south of the equator). The Milankovitch cycles comprise the shape of Earth’s orbit, known as eccentricity; the angle Earth’s axis is tilted with respect to Earth’s orbital plane, known as obliquity; and the direction Earth’s axis of rotation is pointed, known as precession.
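</p><p>To get a feel for how these three cycles superpose, here is a deliberately simplified toy model: each cycle is treated as a pure cosine with its well-known approximate period, and the weights are arbitrary illustrative values. Milankovitch’s actual calculation works from the orbital elements themselves, together with latitude and season, so this is only a sketch of the idea of summing cycles:</p>

```python
import math

# Approximate periods, in years, of the three Milankovitch cycles.
PERIODS = {"eccentricity": 100_000, "obliquity": 41_000, "precession": 23_000}

def toy_insolation_anomaly(years_before_present, weights=None):
    """Toy sum-of-cosines stand-in for an insolation anomaly (arbitrary
    units). Illustrative only: real insolation depends on the orbital
    elements, latitude, and season, not on fixed sinusoids."""
    if weights is None:
        weights = {"eccentricity": 1.0, "obliquity": 0.55, "precession": 0.3}
    return sum(
        weights[name] * math.cos(2 * math.pi * years_before_present / period)
        for name, period in PERIODS.items()
    )
```

<p>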
Milankovitch combined the gradual changes in each of these three movements to create a comprehensive mathematical model for calculating differences in solar radiation at various Earth latitudes, along with the corresponding surface temperatures. The model is sort of like a climate time machine: it can be run backwards and forwards to examine past and future climate conditions. He calculated that Ice Ages occur approximately every 41,000 years. Subsequent research confirms that they did occur at 41,000-year intervals between one and three million years ago. But about 800,000 years ago, the cycle of Ice Ages lengthened to 100,000 years, matching Earth’s eccentricity cycle. While various theories have been proposed to explain this transition, scientists do not yet have a clear answer.</p><h2 id="milankovitch-cycles-can-not-explain-earth-s-current-warming-">MILANKOVITCH CYCLES CANNOT EXPLAIN EARTH’S CURRENT WARMING:</h2><p>Milankovitch cycles can’t explain all the climate change that has occurred over the past 2.5 million years or so. More importantly, they cannot account for the current period of rapid warming Earth has experienced since the pre-industrial period (between 1850 and 1900), particularly since the mid-20th century. Scientists are confident Earth’s recent warming is primarily due to human activities — specifically, the direct input of carbon dioxide into Earth’s atmosphere from burning fossil fuels. Milankovitch cycles operate on long time scales, ranging from tens of thousands to hundreds of thousands of years. In contrast, Earth’s current warming has taken place over time scales of decades to centuries. Over the last 150 years, Milankovitch cycles have not changed the amount of solar energy absorbed by Earth very much. In fact, NASA satellite observations show that over the last 40 years, solar radiation has actually decreased somewhat. Finally, Earth is currently in an interglacial period.
If there were no human influences on climate, scientists say, Earth’s current orbital positions within the Milankovitch cycles predict that our planet should be cooling, not warming, continuing a long-term cooling trend that began 6,000 years ago. And there we have it! The reason there are ice caps at the North and South poles of Earth is that they are part of a cooling cycle that began 6,000 years ago!</p><p>From the above, it is clear that without human intervention our planet should be cooling. This brings us to a serious issue: our activities on Earth can alter even climatic patterns of this scale. Our interventions can flip the cycle from cooling to warming, and this is a global concern, because a warming planet melts the polar ice caps, which in turn raises ocean levels and brings tremendous disasters in the form of floods and coastal inundation. It’s time to do something about the growing concern of melting ice caps. There are multiple research and startup-led efforts experimenting with different techniques to refreeze or preserve Arctic ice. Sustainable development is the key to striking the required balance. All of us should practice sustainable development and inform those who are not practicing it. It’s time to come together as a whole, as we did for the depleting ozone layer.<br></p>]]></content:encoded></item><item><title><![CDATA[Quantum Precision: The Untold Story of Metrology in India]]></title><description><![CDATA[Behind every accurate measurement lies a silent story of metrology: the science of measurement. From the calibration of IST to quantum metrology, this article dives into the fascinating world of metrology in India. Based on an exclusive interview with Dr.
Venugopal Achanta, the director of CSIR-NPL.]]></description><link>https://iit-techambit.in/quantum-precision-the-untold-story-of-metrology-in-india/</link><guid isPermaLink="false">686a8134a1df0805311e2db4</guid><dc:creator><![CDATA[Sriraman]]></dc:creator><pubDate>Fri, 24 Oct 2025 09:05:25 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/10/cold--smooth---tasty.-1-cover-page.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/10/cold--smooth---tasty.-1-cover-page.png" alt="Quantum Precision: The Untold Story of Metrology in India"><p></p><p>Whenever we make a measurement, be it from a ruler, thermometer or a weighing scale, we tend to blindly trust the measurement device. But how do we know that it is accurate? </p><p>Whether you're filling your fuel tank, relying on GPS for navigation, or trusting your blood pressure monitor, a silent guardian ensures the accuracy of it all — metrology, the science of measurement. At the forefront of this in India is the <strong>CSIR-National Physical Laboratory (NPL)</strong>, led by its current Director, <strong>Dr. Venu Gopal Achanta</strong>. Often overlooked outside the scientific community, metrology plays a crucial role in India’s economic, industrial and technological ecosystem.</p><p>This article dives into the significance of metrology in India, based on an in-depth interview with Dr. Venu Gopal, bringing out the hidden stories, challenges, and future of India’s national measurement framework.<br></p><h3 id="npl-india-s-backbone-of-accuracy"><strong>NPL: India’s Backbone of Accuracy</strong></h3><p>As India’s only <strong>National Measurement Institute (NMI)</strong>, the NPL serves as the ultimate authority for all measurement standards in the country. NPL is responsible for the <strong>realization, and dissemination of the SI units,</strong> such as the kilogram, metre, second, and kelvin. 
This forms the foundation of accurate and consistent measurements across sectors. It takes care of the testing and calibration of a wide array of instruments, ranging from industrial machinery and laboratory equipment to imported devices.</p><p>This role is of paramount importance across industries. For instance, in pharmaceuticals, minute deviations in the lab apparatus can affect drug efficacy and safety. In aerospace industries, sub-millimeter accuracy can be the difference between a successful launch and a catastrophic failure. In each case, the industrial equipment must be calibrated to NPL’s national standards, directly by NPL or indirectly through accredited laboratories.</p><p>Quality infrastructure for industries is voluntary across the world. However, there is a <strong>legal metrology framework</strong> under which the legal requirements for measuring instruments are enforced. NPL works with the legal metrology team to maintain <strong>traceability</strong>, meaning that the certification of the equipment is traceable to NPL even from the consumer level. <br><br>During COVID-19, NPL was one of the few laboratories that remained operational. It tested and calibrated critical devices like blood pressure monitors and digital thermometers. “We designed the digital thermometers, made them in house and transported them to various places,” adds Dr. Venu Gopal.</p><p>NPL’s role goes far beyond fundamental research; it’s deeply tied to industry and innovation. “The ‘I’ in ‘CSIR’ stands for ‘Industry,’ and we cater to about 4000 industries under metrology,” explains Dr. Venu Gopal. These include <strong>service projects, consultations, and testing for specific applications</strong>. For example, during COVID-19, numerous companies approached NPL with air purifier designs. 
NPL tested their safety, filtration, and noise levels, and helped redesign and improve the devices.</p><p>NPL also plays a vital role in India’s self-reliance<strong> (Atmanirbhar Bharat)</strong> by ensuring that products made in India can be <strong>calibrated, certified, and quality-tested within the country</strong> itself. This is achieved by developing <strong>Certified Reference Materials (CRMs) </strong>indigenously, which are standardized to Indian conditions of temperature, humidity etc. By doing so, NPL helps prevent the import of foreign reference materials, saving crores of rupees annually. While "Make in India" is important, the Director emphasizes that quality assurance matters even more, and metrology is what guarantees that quality. <br></p><h3 id="ist-and-the-art-of-measuring-time"><strong>IST and the Art of Measuring Time</strong></h3><p>Many would recall the series of "beeps" from the All India Radio (Akashvani) that announced the strike of the hour. What few know is that this was generated by NPL, which has been calibrating Indian Standard Time (IST) since the 1940s.</p><p>Today, the most common way to access IST is via the <strong>Network Time Protocol (NTP)</strong>. This protocol allows computers to ping specific servers (like ntp.nplindia.res.in) and synchronize their clocks with IST using just a few kilobytes of data.</p><p>NTP provides millisecond-level accuracy, which is sufficient for daily tasks and internet synchronization. For more precise needs, <strong>Precision Time Protocol (PTP)</strong> is used, offering <strong>microsecond-level accuracy</strong>. 
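</p><p>Under the hood, a single NTP exchange with a server such as ntp.nplindia.res.in records four timestamps and estimates the client’s clock offset and the round-trip delay from them. The formulas below are the standard ones from the NTP specification (RFC 5905); the function name and example numbers are our own illustration:</p>

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Clock offset and round-trip delay from one NTP exchange.
    t0 = client transmit, t1 = server receive,
    t2 = server transmit, t3 = client receive (all in seconds).
    These are the standard formulas from RFC 5905."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: a server clock 0.5 s ahead of the client, with a symmetric
# 0.1 s one-way network latency.
offset, delay = ntp_offset_and_delay(0.0, 0.6, 0.6, 0.2)
```

<p>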
At the highest level, space organizations like ISRO use <strong>Two-Way Satellite Time and Frequency Transfer (TWSTFT)</strong>, which enables synchronization at the <strong>nanosecond level</strong>, critical for space missions.</p><p>The international body that generates <strong>Coordinated Universal Time (UTC)</strong> is the <strong>BIPM</strong> (International Bureau of Weights and Measures). About 400 high-precision clocks at 90 locations around the world provide their data to the BIPM. To minimize the impact of fluctuations, an ensemble average of the time signals is computed to generate UTC. NPL contributes to this global ensemble and, in return, receives monthly UTC data from the BIPM. Using this information, NPL "steers" IST, adjusting its atomic clocks to remain tightly synchronized with UTC.</p><p>“Every 15 minutes, a signal is shared to BIPM via satellite to maintain high accuracy,” explains Dr. Achanta. NPL uses an ensemble of different atomic clocks, such as Cs fountain clocks, Rb clocks, and H masers. Using a weighted-average approach, NPL combines the outputs of all these clocks to generate an IST that is both accurate and stable over the long term.</p><p>This continuous process, known as "<strong>clock seeding</strong>", involves regular calibration, synchronization, and correction. Recently, NPL has achieved a <strong>remarkable precision of just 1 nanosecond</strong> uncertainty in IST. <br></p><h3 id="the-shift-toward-quantum-metrology-standards"><strong>The Shift Toward Quantum Metrology Standards</strong><br></h3><p>Previously, the seven SI units were based on physical prototypes (like a metal cylinder for the kilogram) or properties of materials (like the triple point of water for the kelvin). But in 2019, the SI units were <strong>redefined based on fundamental constants of nature</strong>, resulting in consistency and ultra-precise measurements. For example, the kilogram has been redefined in terms of the Planck constant.
NPL has developed the ‘Kibble balance’ to measure the kilogram with microgram-level accuracy, a pursuit that took around 20 years of continual refinement. </p><p>The new SI system is largely based on fixed constants of nature, which paved the way for quantum metrology standards. Quantum metrology uses principles of quantum mechanics to make <strong>ultra-precise measurements</strong>. Unlike traditional instruments, quantum metrology devices do not require external calibration, as they work on the basis of invariable quantum effects. This makes them capable of <strong>self-calibration</strong>. Scientists at NPL are currently working on designing and developing such robust quantum standards in India.<br></p><h3 id="environmental-metrology"><strong>Environmental Metrology</strong></h3><p>NPL has recently set up Asia’s first calibration and certification facility for environmental monitoring equipment — crucial for accurate air quality indices and reporting of emissions. Furthermore, NPL operates a <strong>solar cell calibration laboratory</strong>, which is <strong>one of only five such recognized labs in the world</strong>. It is particularly notable for offering the lowest uncertainty in measuring short-circuit current, a key parameter in assessing the efficiency of solar cells.</p><p>Over the last three years, India has made a major leap in acoustic standards by becoming <strong>one of only three countries in the world to establish a full-range national standard</strong> <strong>for sound</strong> frequencies, covering 2 Hz to 20 kHz. This has applications in noise mapping, which helps assess and monitor noise pollution across cities. For the first time since the 1960s, NPL is conducting <strong>noise mapping of Delhi-NCR</strong>. According to India's environmental noise standards, any sound above <strong>50 decibels (dB)</strong> is considered noise pollution. But today, even the quietest areas in Delhi-NCR exceed this threshold.
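</p><p>For context, decibel figures like the 50 dB threshold above are sound pressure levels on a logarithmic scale relative to the standard 20 µPa reference (roughly the threshold of human hearing). A minimal sketch of that conversion follows; the helper is our own illustration, not NPL code:</p>

```python
import math

P_REF = 20e-6  # standard reference sound pressure: 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in decibels relative to the 20 uPa reference."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# Every tenfold increase in pressure adds 20 dB; the 50 dB limit
# corresponds to a sound pressure of roughly 6.3 millipascals.
```

<p>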
The new data provided by NPL is already influencing policy decisions and triggering the formation of regulatory committees.</p><p>Beyond this, NPL also contributes steadily to the Sustainable Development Goals (SDGs) through its research on clean water and sanitation, climate action, and more.</p><h3 id="global-engagements-and-the-current-challenges"><strong>Global Engagements and the Current Challenges</strong></h3><p>International collaboration and consensus play a huge role in metrology. Dr. Venu Gopal, as a member of the <strong>International Committee for Weights and Measures (CIPM)</strong> and <strong>the International Union of Pure and Applied Physics (IUPAP), </strong>has been representing India in these global forums. As the first Indian elected to the CIPM in 23 years, Dr. Venu Gopal has brought India to the forefront of international engagements relating to metrology. He emphasizes how such representation ensures that India’s perspective is heard on important global matters. For example, when a consultative committee was proposed to discuss environmental issues, India advocated for the inclusion of developing countries in it.</p><p>In many developed nations, there is seamless coordination between institutions such as the metrology department, space agencies, and the telecom sector, allowing for collaboration, innovation, and development. In India, however, achieving such coordination between multiple organisations is challenging. For example, India recently voted in favor of removing the leap second correction from global timekeeping standards, a critical decision discussed at the CIPM. But implementing this change and translating it to the telecom industry remains a challenge.</p><blockquote>“The economy of a country is reflected directly in its metrology. Any top tier economy has a well-established metrology programme,” says Dr. Venu Gopal.
</blockquote><p>Countries like China and the United States invest millions of dollars into metrology annually, through institutions like NIST (National Institute of Standards and Technology). These investments not only improve precision in technology but also fuel industrial growth, thus boosting the economy. In contrast, India faces both <strong>financial constraints and administrative challenges.</strong></p><p>Even under the National Quantum Mission (NQM), NPL did not receive funding, despite requests. Unlike countries where institutes are empowered to share resources and collaborate freely, NPL still faces challenges in funding and inter-agency coordination.<br></p><h3 id="the-road-ahead"><strong>The Road Ahead </strong></h3><p>Looking ahead, Dr. Venu Gopal believes that the future of Indian metrology lies in recognizing its foundational role.  Leaders of scientific institutions must understand the importance of metrology and not view NPL as just another research institute, responsible for publishing papers in high-impact journals.  While scientific publishing is a part of NPL’s work, its broader responsibility lies in <strong>supporting the national metrology framework and enabling industries</strong> through precision science.</p><p>The Director underscores the importance of having <strong>state-of-the-art basic R&amp;D facilities</strong>. A striking example came a few years ago, when a Korean group announced the discovery of room-temperature superconductivity — a potential scientific breakthrough. Within just ten days, NPL was able to reproduce the reported material, independently test it, and scientifically refute the claim, thanks to its well-maintained measurement systems. Over the next decade, NPL aims to expand these capabilities, develop quantum metrology standards, and modernize the equipment being used.</p><hr><p>‘If you cannot measure it, you cannot control it’ said Lord Kelvin. 
Such is the profound importance of measurement in a world driven by innovation and development. In this respect, NPL has been a cornerstone of the nation’s metrology.</p><p>As India strides forward in space exploration, advanced manufacturing, green energy, and digital innovation, the demand for precision and reliability has never been higher. From calibrating the instruments that power our electric grids to developing quantum-based standards, NPL stands at the crossroads of science and industry.</p><p>The future of Indian science, industry, and policy will be measured, quite literally, by how well we continue to invest in and support this silent sentinel of accuracy.<br><br></p>]]></content:encoded></item><item><title><![CDATA[Parker Solar Probe: On A Mission To "Touch The Sun"]]></title><description><![CDATA[The article presents a deep insight into the Parker Solar Probe mission launched by NASA to study the Sun like never before in human history.]]></description><link>https://iit-techambit.in/parker-solar-probe/</link><guid isPermaLink="false">679dcc37a1df0805311e2a67</guid><dc:creator><![CDATA[Manas Kumar Gautam]]></dc:creator><pubDate>Sat, 26 Apr 2025 17:17:01 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/04/parker-solar.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/04/parker-solar.png" alt="Parker Solar Probe: On A Mission To "Touch The Sun""><p>On the night (EST) of Dec 26, 2024, the mission operations team at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, received a signal from NASA’s Parker Solar Probe (PSP). The team had been out of contact with the spacecraft during closest approach, which occurred on Dec. 24, with Parker Solar Probe zipping just 3.8 million miles from the solar surface while moving about 430,000 miles per hour. The signal confirmed both the spacecraft’s closest-ever approach to the Sun and its safety. 
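</p><p>As a quick sanity check, the reported figures convert neatly into metric units. The sketch below is illustrative only; the astronomical-unit value is the standard IAU figure:</p>

```python
# Back-of-the-envelope conversions for the reported flyby figures.
MILE_KM = 1.609344                    # kilometres per statute mile

speed_mph = 430_000
speed_km_s = speed_mph * MILE_KM / 3600        # miles/hour -> km/second

perihelion_km = 3.8e6 * MILE_KM                # 3.8 million miles in km
AU_KM = 149_597_870.7                          # one astronomical unit in km
fraction_of_au = perihelion_km / AU_KM

print(round(speed_km_s))            # ~192 km/s
print(round(fraction_of_au, 3))     # ~0.041 of the Earth-Sun distance
```

<p>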
NASA’s Parker Solar Probe is designed to revolutionize our understanding of the Sun. It has become the first spacecraft to fly so close to our star. Over seven years, the spacecraft will complete 24 orbits around the Sun. Its features and achievements do not end there, so let’s explore them.</p><p>Parker Solar Probe was launched on August 12, 2018, from Cape Canaveral Air Force Station, Florida. The mission was designed and managed at NASA’s Goddard Space Flight Center, Greenbelt, Maryland, USA, and Johns Hopkins University Applied Physics Laboratory, Baltimore, Maryland, USA.</p><blockquote><strong>Since the mission is designed for a massive celestial body, its achievements are also huge.</strong></blockquote><blockquote>At closest approach, Parker Solar Probe hurtles around the Sun at approximately 194,000 metres per second (at this speed, the journey from Srinagar to Kanyakumari would take only about 18.5 seconds).</blockquote><blockquote>Parker Solar Probe was the first NASA mission to be named for a living person: Dr. Eugene Parker, the first person to predict the existence of the solar wind. In 1958, Dr. Parker developed a theory showing how the Sun’s hot corona – now known to be millions of degrees Fahrenheit – is so hot that it overcomes the Sun’s gravity. According to the theory, the material in the corona expands continuously outwards in all directions, forming a solar wind. Dr. Parker watched the launch with the mission team from Kennedy Space Center in Florida on Aug. 12, 2018. He died on March 15, 2022, at age 94.</blockquote><blockquote>Parker Solar Probe has become the first spacecraft to approach within just 3.8 million miles of the Sun. (The distance may seem large, but if the Sun and Earth were 1 m apart, Parker Solar Probe would be only about four centimetres 
away from the Sun.)</blockquote><p><strong>DESIGN AND PROTECTION:</strong></p><p>PSP weighs about 685 kilograms, which is relatively light for a spacecraft, yet it was launched aboard one of the most powerful rockets in the world, the United Launch Alliance Delta IV Heavy. That’s because it takes a great deal of energy to go to the Sun – 55 times more energy than it takes to go to Mars. Another astonishing question about PSP is why it doesn’t melt. One key to understanding what keeps the spacecraft and its instruments safe is the distinction between heat and temperature. In space, the temperature can be thousands of degrees without an object receiving significant heat or feeling hot. This is because temperature measures how fast particles move, whereas heat measures the total amount of energy they transfer. Particles may be moving fast (high temperature), but if there are very few of them, they won’t transfer much energy (low heat). Since space is mostly empty, there are very few particles that can transfer energy to the spacecraft. The corona through which Parker Solar Probe flies (more on it shortly), for example, has an extremely high temperature but very low density. Compared to the visible surface of the Sun, the corona is far less dense, so the spacecraft interacts with fewer hot particles and doesn’t receive as much heat. That means that while Parker Solar Probe travels through a region with temperatures of several million degrees, the surface of the heat shield that faces the Sun only gets heated to about 1,400 degrees Celsius. That is still far too hot: for comparison, lava from volcanic eruptions is typically between 700 and 1,200 degrees Celsius. To withstand that heat, Parker Solar Probe uses a heat shield known as the Thermal Protection System, or TPS, which is 2.4 meters in diameter and about 115 mm thick and can withstand temperatures of nearly 1,377 degrees Celsius. 
Behind those few inches of shielding, the spacecraft body sits at a comfortable 30 degrees Celsius.</p><p><strong>OBJECTIVES:</strong></p><p>With such a formidable protection system, PSP can survive the Sun’s harsh environment. But why has NASA sent PSP into such harsh conditions at all? The answer lies in the many unsolved mysteries surrounding our nearest star. These include:</p><p>CORONAL HEATING PROBLEM:</p><p>The corona is the outermost part of the Sun's atmosphere. It is usually hidden by the bright light of the Sun's surface, which makes it difficult to see without special instruments. However, the corona can be seen during a total solar eclipse. The corona reaches extremely high temperatures, yet it is very dim because it is about 10 million times less dense than the Sun’s surface, and this low density makes it much less bright. The temperature of the corona is more than 1.1 million degrees Celsius, whereas the visible surface, about 1,610 km below, is only around 5,537 degrees Celsius. How the Sun sustains this inversion has long been a great mystery to scientists, who call it the coronal heating problem. PSP, flying through the corona itself, will seek clues to its behavior and offer scientists the chance to solve this mystery.</p><p>SOLAR WIND:</p><p>NASA’s PSP will answer one of the most important questions in the field of solar science: what is the origin of the solar wind, and how is it accelerated to speeds of up to 1.8 million miles per hour? The solar wind streams off the Sun in all directions at speeds of about 400 km/s (about 1 million miles per hour). The source of the solar wind is the Sun's hot corona. It emanates from features of the Sun, such as dark and cool regions called coronal holes and active regions, which are characterized by strong magnetic fields. These regions release solar wind at different speeds and densities. 
But all release the same basic components of solar wind — electrically charged particles such as protons and electrons. The temperature of the corona is so high that the Sun's gravity cannot hold onto it. This wind fills our entire solar system. When gusts of solar wind arrive at Earth, they can set off beautiful auroras — but at the same time, they expose astronauts to radiation, interfere with satellite electronics, and disrupt communications signals like GPS and radio waves. The more we understand the fundamental processes that drive the solar wind, the more we can limit some of these effects.</p><p>NEAR LIGHT SPEED PARTICLES:</p><p>Parker Solar Probe is also studying how some particles accelerate away from the Sun at mind-boggling speeds – more than half the speed of light, or upwards of 140 million meters per second! These particles move so fast that they can reach Earth in under half an hour, which means they can interfere with electronics on board satellites with very little warning. Studying these particles will help us prepare for such events and protect our technology.</p><p>According to NASA, PSP has completed 22 of its 24 planned orbits around the Sun. Parker Solar Probe’s next two close passes of the Sun, at approximately the same distance and speed, will occur on March 22, 2025, and June 19, 2025. PSP has contributed significantly to our understanding of our nearest star. It is the first time we are sampling the Sun from within its atmosphere, and this matters because what we observe from Earth might not tell the whole story. At the same time, it is an example of next-level, ground-breaking technology. The Sun, no doubt, is an integral part of our lives; we cannot imagine a day without it. Yet our star is also a source of danger: continuous emission of solar wind can seriously damage our technology. 
Thus, having as much information as possible about our nearest star is a prerequisite to deepening our understanding of stars in general. Moreover, since the Sun is a natural fusion reactor, a properly evolved knowledge of it could one day help mankind harness enormous amounts of energy. That, in turn, could help us overcome the great barriers of space exploration and travel: speed and time.</p>]]></content:encoded></item><item><title><![CDATA[The Golden Record]]></title><description><![CDATA[If you were to communicate with aliens, what would you tell them? More importantly, how would you? Scientists attempted to answer this question while creating the Golden Records. Are they a message for extraterrestrials, or a testimony of our shared humanity?]]></description><link>https://iit-techambit.in/the-golden-record/</link><guid isPermaLink="false">6809265aa1df0805311e2d2e</guid><dc:creator><![CDATA[Sanskriti Arya]]></dc:creator><pubDate>Thu, 24 Apr 2025 11:11:11 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/04/golden-record.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/04/golden-record.png" alt="The Golden Record"><p><em>“To the makers of music — all worlds, all times.”</em></p><p>These words are hand-etched on the un-grooved portion of two phonograph records flung into deep space, carrying a heartfelt time capsule from mankind. They contain (almost) everything that has been deemed dear to us: sounds of laughter, images of a mother nursing her baby, and folk music. There are no depictions of war and poverty, but there are recordings of greetings spoken in over fifty languages, ranging from “Peace.” to “We are thinking about you all. Please come here to visit when you have time.” In essence, it sings a simple message: <em>Hello. I hope you find this and, by extension, us.</em></p><p>The time capsules in question are the golden records perched atop Voyager 1 and 2, launched by NASA in 1977. 
Their mission was a flyby of the Jovian planets and their moons, after which they were to embark on a journey into interstellar space, never to return. This made the probes the perfect vehicles to carry a message for extraterrestrials.</p><p>Mimicking the Pioneer plaques (sent with Pioneer 10 and 11) on their covers, but massively increasing the amount of information one could send, the golden records were one of a kind.</p><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdu0ekaHQYmNH1KOxI1SFPSszhqZlxt8Ha56ncmOe71FH6czjA2B6LFAAQouiGBoxYCWA2fCPMCep9i2GarehfQOccgAM6W1Wm54ySNm-2vvPLLB8OECiAmpLJBt_XdbrYNYH0Mnw?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXexOKbG3sVq3_RN335NygqcM-Vmi1SW9gi73GkWoiaIgiiOAHlDZaAsqdPd1FOJE1njO_JkZZeSAWrG5CWAKbXHWZo_rAInsC6HZ00xCP0G8Ut-gTa4yBKyR42-Ka6HwhHMZXKtYw?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><p>However, this was not the first time such an ambitious and far-fetched endeavour was undertaken. We have been communicating with the supposed emptiness of space since we gained the ability to do so, starting with the Arecibo Message: a frequency-modulated radio message carrying basic information about humanity and earth, transmitted towards the globular cluster Messier 13 in 1974.</p><p>But what information does one convey in such messages? 
More importantly, <em>how </em>is that information conveyed to an intelligent extraterrestrial civilization which may in fact be nothing like what we have imagined aliens to be?</p><p>Carl Sagan, a planetary scientist closely involved in the creation of these messages, comments upon this in his book ‘Murmurs of Earth: The Voyager Interstellar Record’: <em>“So if it is possible to communicate, we think we know what the first communications will be about: They will be about the one thing the two civilizations are guaranteed to share in common, and that is science. The greatest interest might be in communicating information on music, say, or social conventions; but the first successful communications will in fact be scientific.”</em></p><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfwd9tq7Y8XfZ-k6KsusLw4vL6dX7y4xbX4n6GfnTuWOUGQvipWax0GzTjvR7YHWkYnkaoAmYYRHN__nClG5JJ3WsaZgdZ5ZMoo8SVuSPBa58zjMuvmHc5vGVgpfgwEdOnMikehoA?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><p>This forms the heart of these endeavours. They use simple scientific and mathematical facts as a means to communicate. And there is quite a lot that needs to be communicated, starting with a set of instructions describing how the subsequent information should be received and interpreted.</p><p>For example, the Arecibo Message consisted of a continuous string of 1,679 binary bits which translates to the shown image (without colour) when arranged as a grid. The number 1,679 was chosen specifically because it is semiprime (the product of two prime numbers: 73 and 23), resulting in only two ways one can form a grid using the bits. 
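</p><p>The semiprime property is easy to verify computationally. This small illustrative script confirms that a 23 × 73 layout and its transpose are the only non-trivial rectangular arrangements:</p>

```python
# Enumerate every rectangular grid that exactly holds 1,679 bits.
N = 1679
grids = [(rows, N // rows) for rows in range(2, N) if N % rows == 0]
print(grids)   # [(23, 73), (73, 23)] -- the only two possible layouts
```

<p>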
Thus, working under the assumption that a knowledge of primes and factors must be possessed by any civilization intelligent enough to intercept and interpret the radio message, we have successfully conveyed our first piece of information!</p><p>Each of the two identical golden records also comes with instructions on how to play it: on the upper left-hand corner of the cover are drawings of the phonograph record and the stylus carried with it (already positioned correctly to play the record from the beginning), indicating that it should be played from the outside in. Here, we face another communicative hurdle, upon which rests the communication of all quantifiable information: the conveyance of units of measurement. How does one tell the aliens that we are approximately six feet tall, or, more importantly, that one rotation of the golden record takes 3.6 seconds, without them knowing what a foot or a second is?</p><p>Naturally, we made use of scientific knowledge again. The time period associated with the fundamental transition of the hydrogen atom is approximately 0.70 billionths of a second, and 3.6 seconds expressed in this unit of time has been inscribed in binary around the drawing on the cover. The same has been done for the time required to play one side of the record (between 53 and 54 minutes), and for other informative illustrations. 
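</p><p>The arithmetic behind this trick is simple. The sketch below is illustrative only (the record’s actual inscription encodes the precisely calibrated value): dividing 3.6 seconds by the hydrogen hyperfine period of roughly 0.704 nanoseconds yields a large integer that can be written in binary:</p>

```python
# Express one record rotation (3.6 s) in hydrogen hyperfine periods.
HYPERFINE_HZ = 1_420_405_751.77      # frequency of hydrogen's 21 cm transition
period_s = 1 / HYPERFINE_HZ          # ~0.704 billionths of a second per cycle

rotation_periods = round(3.6 / period_s)
print(rotation_periods)              # about 5.1 billion periods
print(bin(rotation_periods))         # the kind of binary string etched on the cover
```

<p>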
Hydrogen was chosen because it’s the most abundant and the simplest of elements in the universe, increasing the chances that intelligent extraterrestrials would recognize it.</p><p>A diagram depicting the two lowest states of the hydrogen atom, with dots and lines indicating the spin moments of the proton and electron, is also given on the bottom right-hand corner, solidifying the idea that the transition time from one state to the other provides the fundamental clock reference throughout the entire process of interpretation of the record.</p><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcWtxmIjaKhNiqK68lRz6dcCvsesMhBlG8FtJMrgwtq-HMYYoDN0VbNnFt5rjulvMJ2Mc_2XWHcviuYsoBQFe_5XXHO2mH3hdO3HAK2h_3PJjemr0XuqnWGntgu93JBe3hQQrunaA?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><p>The cover also features a diagram that cleverly conveys our location in space using pulsars: rapidly rotating remnants of stars which generate powerful beams of radiation sweeping across the sky (like a stellar lighthouse!). Individual pulsars spin at different speeds, making them identifiable even with very simple radio receivers. A diagram on the record cover gives the location of our sun with reference to fourteen neighbouring pulsars. Each pulsar is connected to the sun by a solid line. The length of the line represents the pulsar’s approximate relative distance from our sun, and along each of the pulsar lines is etched the precise frequency associated with that pulsar. This, however, is a 2-D diagram, and to convey the 3-D positions of all the pulsars and our sun, a fifteenth solid line was introduced, with a tick mark at its end, indicating the relative distance between our sun and the center of our galaxy. 
Tick marks on all the pulsar lines similarly indicate their relative distances as well, establishing their position (and ours) in 3-D space using simple concepts of triangulation.</p><p>An important part of a time capsule is knowing how long ago it was put together, and to convey this, the record cover has a two-centimeter diameter area of uranium-238 electroplated on it. The steady decay serves as a radioactive clock, and examination of the leftover uranium along with its daughter isotopes gives the time elapsed since the spacecraft was assembled and launched.</p><p>When we come to the actual data in the record itself, everything that is sent must be a precarious mix of universal scientific knowledge and information exclusive to the earth and the solar system that can be extrapolated from said knowledge.</p><p>As Carl Sagan says in his book, “In choosing pictures, we were faced with two contradictory demands: the pictures should contain as much information as possible, and they should be as easy to understand as possible. It seemed to me that one solution would be to have on board some pictures with very little information, primarily to help the recipients understand how to see pictures.” It is also rather useful to include ‘checks’ that help confirm that one is indeed interpreting things the right way. Hence, the very first image, if properly translated and calibrated, will display a circle, which is also engraved on the cover of the record. Being an engraving, it can be perceived by senses other than vision, which is meant to give the recipients a way of comparing a photograph with an object they can touch.</p><p>The next set of photographs is rich in arithmetic, providing a kind of dictionary of simple mathematical and chemical symbols that are used in subsequent images to convey information such as the size of a human being, the size of planets, the elements most abundantly found on earth, etc. 
There are anatomical pictures of the human body, and simplified diagrams of DNA with its constituent elements listed, highlighting its double-helical structure.</p><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf22cXnxNdDX1LxJH8VYTC22MpwH4n7CbgGeQxcdpxwNEGkRl07lnyemdUD9xxFAAyiQJ6OVO17YYCi9CVSHpRiZdouml_fX1xe1_DJnDrcRy_x76BhbbiKraJPjApU363uyx5p5A?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf8PFd-WnirudVRHugCPn1KesiIETQ-c6fp34VYiNJ93PktJL6KtjhvOEF3a22QqhjKx3q6Cmbk6pkUAjozWDjCRrYsYpgcf4y3Jk1-cXkyMWYT2JnBf8eFaPXYFrFvN0ivgeM5WA?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><p>Silhouettes were also utilised, sometimes following a photograph of the same scene, as the high contrast made it easier to isolate different objects in a photograph and bring attention to some of them. There are multiple images of the natural world: seashells, dolphins leaping in the air, insects pollinating flowers, geographical features like islands and riversides, sunsets, and earth as seen from space.</p><p>There is a photograph of Andromeda, a galaxy which may also be visible to the extra-terrestrials, placed with a segment of the pulsar map from the cover, to hint towards the fact that it is another one of our stellar neighbours. 
As pointed out by Jon Lomberg, part of the team which designed the image and sound sequence on the record, “it may be the only object in the whole package of pictures that both we and the recipients have seen firsthand.” (This image also provides a check on the “handedness” of all pictures—the recipients can compare it to Andromeda as observed by them to ensure that reconstructed images are not laterally inverted.)</p><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc6F4801DzlxCnu3-gXYPPAjcofIg3CdVZpM4r5gLk066UfH_HTn2S7tF7w0baO43TrfIOHyCYnJQK6_o1KQuFMpwKTDoQUOW6vepj4KvUHUthXd6fCLEOAO6UaAbV-ZS3lkhPyJg?key=CryNReLA01RHtNAZG9r7emy1" class="kg-image" alt="The Golden Record"></figure><p>There are numerous images which focus on the lived experience of humans, showcasing complex but important actions like parenting, shopping, teaching, hunting and gathering, eating, researching, driving, manufacturing, painting, walking, spacewalking and playing the violin. Of course, it is unlikely that these complicated images will be interpreted in their entirety, but a communication of the essence of humanity would have been incomplete without their inclusion.</p><p>By sending the same image three times, each copy indicating the amount of red, blue, or green in the picture, one can send coloured pictures as well. Around 20 such images were sent, the first of which is the solar spectrum. Every star has a distinctive spectrum: a continuous band of colour broken by a series of dark lines, which correspond to wavelengths absorbed by elements in the star. 
This tells us a great deal about the temperature of the stellar surface and the “colour” of the star, and an extensive knowledge of these spectra provides much of the observational basis for studying the universe.</p><p>Extraterrestrials with basic knowledge of stellar astronomy should be able to identify the sun as a G2 star from its spectrum (even if seen in black and white!), and from there reconstruct what the spectrum of such a star should look like in colour (and by colour, of course, we mean the portion of electromagnetic radiation visible to <em>humans</em>, but the absorption lines should convey that we wish to indicate something about this particular portion of the spectrum.) Using this, they can understand the concept of colour separation, and then they’ll be able to see flowers, coral reefs, jungles, deserts, snow-studded trees, illuminated buildings, and skin tones in their original colours.</p><p>As is evident from many of the images, unlike earlier messages, the information stored in the golden records is not purely scientific. The audio segment of the record features natural sounds like that of thunder, volcanoes, rain, birds, wild dogs, and crickets. It also includes the sound of the human heartbeat, footsteps, laughter (Carl Sagan’s), heartfelt greetings in over 55 languages, and a compressed version of an hour-long recording of a woman’s brainwaves as she thought about topics like love, death and civilization.</p><p>And what is a record without music in it? An eclectic 90-minute selection of music from many cultures was included, ranging from western classics to indigenous folk music.</p><p>All in all, the golden records were as complete a time capsule as we could hope to create as a species, made under near-impossible time, budget, and administrative constraints by a group of highly motivated people. The Voyager crafts are currently the farthest man-made objects, and they only move farther every second. 
The records, secured safely aboard the spacecraft, are estimated to survive for billions of years (gold being considerably non-reactive), possibly outliving <em>all</em> of mankind, and earth itself. If intelligent extraterrestrials ever come across them, they will witness the sombre echoes of lives lived long ago, reaching through time with warm hands.</p><p>Of course, this hinges on the assumption that extraterrestrial civilizations are scientific creatures, given that we are using “the language of science”; but developing any language, even out of science, risks making it too well-tailored to us. Even if scientific knowledge <em>is </em>universal, who is to say that the specific set of scientific knowledge discovered and developed by <em>us</em> is universal? What if it is possible to build an intelligent civilization without discovering hydrogen at all?</p><p>A far more absurd assumption, however, is that these extraterrestrial creatures will be somewhat similar to us in terms of how they receive and interpret sensory input. We have transmitted images and sounds, but what if they have nothing akin to eyes or ears? It is perfectly plausible for them to navigate their world using magnetic fields, have smell as their primary sense, or possess sensory mechanisms unimaginable to us. Our attempt at communication will have been futile in that case.</p><p>This makes the reception and interpretation of these messages by intelligent extra-terrestrials which <em>also</em> share a similar notion of science and sensory input laughably unlikely.</p><p>What then, is the real purpose of the golden records?</p><p>A contributor to the project, B. M. Oliver, said it aptly: “There is only an infinitesimal chance that the plaque will ever be seen by a single extraterrestrial, but it will certainly be seen by billions of terrestrials. 
Its real function, therefore, is to appeal to and expand the human spirit.” More than an exercise in communication with unknown beings, it served as an exercise in communication with our collective conscience. It allowed us to make note of what is unique to our planet: the sound of earthquakes, birds flying in the sky, a baby held dearly by her mother.</p><p>Through the golden record, we saw ourselves in a different light: not as conquerors of planet earth, but as children of it. We saw ourselves as a civilization built upon scientific knowledge, social bonds and music; a species with no companion but itself, hurtling through darkness wielding lights of our own. Even if the gaze of no strange eyes falls upon the golden records, their creation will not have been in vain. They have already achieved what they set out to.</p>]]></content:encoded></item><item><title><![CDATA[DNA Data Storage]]></title><description><![CDATA[Imagine all of the world's data, films, books and websites condensed into a single drop of liquid. Sounds like sci-fi? DNA data storage could make it real. We talk with Prof Soumya De from IIT Kharagpur and explore its viability, potential and stability.]]></description><link>https://iit-techambit.in/dna-data-storage/</link><guid isPermaLink="false">67a3a39da1df0805311e2b8f</guid><dc:creator><![CDATA[Satadru Sen]]></dc:creator><pubDate>Tue, 01 Apr 2025 08:15:34 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/03/dna-data-storage.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/03/dna-data-storage.png" alt="DNA Data Storage"><p><em>"We are at the dawn of an era where biology and digital technology converge in the most unexpected ways."</em></p><p>This fusion of disciplines presents groundbreaking opportunities, particularly in data storage. 
As data storage requires physical space and material, there is growing concern about whether we will have enough capacity for tomorrow’s data. Even if we press every solid-state drive and hard disk into service, it is not a stretch to say that we may eventually exhaust our silicon and spatial reserves. And even though today’s options meet our needs, it remains imperative to think of revolutionary solutions to the problem. A solid idea, even a draft of what’s to come, may one day solve major headaches for the systems engineers of governments and top firms.</p><p>Frankly, we already have a candidate solution. If we told you that the entire Internet, estimated at over 100 zettabytes of data, could be stored in something smaller than a sugar cube, would you believe it? We need only look to the most basic unit of heredity in our bodies: DNA.</p><p>With these thoughts in mind, my colleague Anirban Pal and I interviewed Soumya De, Associate Professor at IIT Kharagpur, to learn more and delve a little deeper into the topic.</p><h3 id="why-dna"><strong>Why DNA?</strong></h3><p><em>“The limitless option that DNA allows is the freedom to store as much data as possible, incomprehensible to human terms”, </em>in Prof. Soumya's words. In essence, molecular-level data storage, achieved by sequencing genomes and storing the extracted data in clusters, has incredible potential. For those who would like to learn more about the technicalities, it translates to encoding strands with the information we need to store.</p><h3 id="mechanism-of-dna-data-storage"><strong>Mechanism of DNA Data Storage</strong></h3><p>DNA data storage starts with the translation of binary data (0s and 1s) into the four nucleotide bases of DNA: <em>adenine (A), cytosine (C), guanine (G), </em>and<em> thymine (T)</em>. Encoding schemes have been devised to accomplish this translation. 
One such method is to assign binary pairs to particular nucleotides; e.g., '00' to 'A', '01' to 'C', '10' to 'G', and '11' to 'T'. This ensures that the binary information is correctly represented in a DNA synthesis-compatible format.</p><p>To enhance the reliability of data storage, error correction codes such as <em>Reed-Solomon or fountain codes</em> are incorporated during the encoding process. These codes add redundant information to the sequence, making it possible to detect and correct errors caused by synthesis, degradation, or sequencing mistakes.</p><p>For instance, if a DNA strand is damaged or partially lost, error correction can help reconstruct missing parts and recover the original information.</p><p>Since DNA molecules cannot be infinitely long, the encoded data is split into multiple short DNA strands, typically 100–200 base pairs each. To ensure proper reassembly, each fragment is labeled with an <strong>index sequence</strong>, similar to page numbers in a book. This allows sequencing technologies to read and reconstruct the data correctly.</p><p>After encoding binary data as nucleotide sequences, physical production of these sequences follows through DNA synthesis. In this process, the nucleotides are arranged in the prescribed order according to the encoded information. Chemical methods of synthesis, for example <em>phosphoramidite synthesis</em>, are most commonly utilized. Nevertheless, DNA synthesis today remains slow and expensive, with production rates quoted in megabytes per hour. Also, synthesis errors may be present, requiring strong error correction processes.</p><p>Once synthesized, the DNA strands are kept in stable conditions so they can last longer. The inherent stability of DNA and its high information density make it a good candidate for long-duration data storage. 
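</p><p>Returning to the encoding scheme described earlier, here is a minimal, illustrative sketch of the encode, fragment, and decode steps in Python (a toy scheme, not any production system; the error-correction layer is omitted for brevity):</p>

```python
# Illustrative DNA encoding: '00'->A, '01'->C, '10'->G, '11'->T.
ENC = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
DEC = {v: k for k, v in ENC.items()}

def encode(bits: str) -> str:
    """Map a binary string (even length) onto nucleotide bases."""
    return ''.join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def fragment(strand: str, size: int = 8):
    """Split a long strand into indexed fragments, like pages in a book."""
    return [(i // size, strand[i:i + size]) for i in range(0, len(strand), size)]

def decode(fragments) -> str:
    """Reassemble fragments by index and map bases back to bits."""
    strand = ''.join(s for _, s in sorted(fragments))
    return ''.join(DEC[b] for b in strand)

bits = format(ord('K'), '08b') * 4      # 32 bits of toy data
strand = encode(bits)                   # 'CAGT' repeated: C-A-G-T per byte
frags = fragment(strand)                # [(0, 'CAGTCAGT'), (1, 'CAGTCAGT')]
assert decode(frags) == bits            # the round trip is lossless
```

<p>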
Encapsulated DNA will survive decades at ambient temperatures and can probably survive longer under controlled environments, including in data centers.</p><p>To retrieve the stored data, scientists first sequence the DNA, which means reading the order of its nucleotide bases (A, C, G, and T). High-throughput sequencing technologies quickly scan the DNA strands and generate a digital version of the sequence. This sequence is then decoded using special algorithms that reverse the original encoding process, converting the DNA back into binary (0s and 1s). Since errors can occur during storage or sequencing, built-in error correction codes help detect and fix any mistakes, ensuring that the retrieved data is accurate and matches the original information.</p><p>Each one of these steps—from encoding and synthesis to storage and sequencing—relies on highly intricate bioinformatics, careful chemical engineering, and rigorous error management. There is ongoing research into improving all these steps for speed, efficiency and economic viability, so that DNA can eventually replace conventional electronic storage methods.</p><p>Here's a cool fact: the Netflix series <em>‘Biohackers’,</em> an awesome watch, has its first episode stored entirely in synthetic DNA by researchers at <em>ETH Zurich</em>. These guys know their jobs well, setting the standard for more cutting-edge technologies for the future. This is just the beginning, as we have only begun to tap into the vast world of biotechnology and the wonders that await us. DNA offers both extreme information density (an estimated 1 exabyte/mm3) and durability as storage advantages. Unlike hard drives that last 3-5 years or tapes that last 10-30 years, DNA can last thousands of years if stored properly. Also, DNA storage does not require electricity to maintain data, making it an eco-friendly alternative. 
These suggestions might sound too good to be true, because, for now, they are: a lot of discussion still needs to take place on their efficiency and cost-effectiveness.</p><p>Here's another mind-boggling fact. A Facebook data center is almost the size of ten football fields, while the same information could be held in a tablespoon of DNA. This fact alone surely raises some eyebrows. But this can be made into a reality. To do that, we need full support from governments for biotechnology and related research. Awareness and the right policies can go a long way, working hand in hand with talented and hard-working scientists, to make this possible.</p><p><strong>The Challenges of DNA Storage</strong><br>There is a need to educate people about the awesomeness and seriousness of this matter. There is a hardwired, preconceived notion amongst the public, reinforced by media glorification, that biotechnologists are mad scientists doing crazy mutation-based animal/human experiments and creating supervillains. The common folk will need a lot of convincing to warm up to the idea of accepting DNA as a data storage option. It's still relatively pricey to synthesize and sequence DNA, writing at an estimated $3,500 per megabyte, although this is expected to fall with improving technology. Writing and reading DNA data aren't yet anywhere near the speeds of hard drives today—400 bytes per second is far, far slower than SSDs. DNA is prone to mutations (small errors), so the encoding techniques used must be advanced enough to ensure the correctness of the data.</p><p><strong>Conclusion</strong><br>DNA storage is not about saving all your favourite movies or photos. It is, rather, the safeguarding of human knowledge and history to be passed down to future generations. It might be in the form of ancient manuscripts, medical records, or an entire library of books, but DNA might just hold the key to eternal data preservation. 
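</p><p>The cost and speed figures quoted above make for a sobering back-of-envelope calculation. The one-gigabyte example below is ours; the per-megabyte cost and bytes-per-second rate are the rough estimates cited in this article, not measured values:</p>

```python
# Back-of-envelope: writing 1 GB to DNA at the rough figures quoted above.
COST_PER_MB_USD = 3_500     # estimated synthesis (write) cost per megabyte
WRITE_BYTES_PER_SEC = 400   # estimated write throughput

gigabyte = 1_000_000_000
cost_usd = (gigabyte / 1_000_000) * COST_PER_MB_USD
write_days = gigabyte / WRITE_BYTES_PER_SEC / 86_400  # 86,400 seconds per day

print(f"Writing 1 GB: ${cost_usd:,.0f} and about {write_days:.0f} days")
```

<p>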
The next time you want to upgrade your hard drive, just imagine having all your digital life in one droplet of liquid. Believe it or not, that future is closer than you think! Bringing this idea to fruition in daily life would open a path for our development as a species and also allow us to explore the intricacies of the untapped world of our genomes.</p>]]></content:encoded></item><item><title><![CDATA[A trip to die for]]></title><description><![CDATA[As US states move ahead with legalizing psychedelic-based treatments for PTSD and severe depression, here’s a look at how a psychedelic drug present inside all of us can (possibly) make our final journey a little less death-like. 
]]></description><link>https://iit-techambit.in/a-trip-to-die-for/</link><guid isPermaLink="false">65718b7a08096d053d12d070</guid><dc:creator><![CDATA[Suhani Soni]]></dc:creator><pubDate>Wed, 26 Mar 2025 10:58:04 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/03/A-Trip-to-Die-for.png" medium="image"/><content:encoded><![CDATA[<blockquote><em>“I didn't know who or what I was or even if I was.  I was simply there, a singular awareness in the midst of a soupy, dark, muddy nothingness. How could I not realize that on earth I was a doctor, husband and father? I was in a position similar to that of someone with partial but beneficial amnesia. That is, a person who has forgotten some key aspect about him or herself, but who benefits from having forgotten it. I had come from nowhere and had no history, so I fully accepted my circumstances… And because I so completely forgot my mortal identity, I was granted full access to the true cosmic being I really am.” </em></blockquote><img src="https://iit-techambit.in/content/images/2025/03/A-Trip-to-Die-for.png" alt="A trip to die for"><p><em>Eben Alexander, neurosurgeon and author</em></p><p>I know this sounds straight out of some mystical, supernatural textbook, but believe me when I say this is a verbatim account from famous American neurosurgeon Eben Alexander, who worked at Harvard Medical School for over 15 years and is also a New York Times bestselling author. Eben Alexander was suffering from bacterial meningitis, an extremely rare disease, and was put under a medically induced coma for his own safety. It was during this coma that he is said to have experienced what the medical community terms a Near Death Experience (NDE).</p><p>The term near-death experience (NDE) was coined by philosopher Raymond Moody more than 40 years ago. NDEs typically occur in truly life-threatening situations such as cardiac arrest, traumatic injury, intracerebral hemorrhage, near-drowning, or asphyxia. 
The descriptions usually include feeling a sense of joy, peace, and love; detachment from one's own physical body (out-of-body experiences); travelling along a region of darkness toward a light at the end; and visions and communications with deceased relatives and friends or “beings of light”. 90% of people who experience an NDE also describe an altered perception of time while in the mystical realm, reporting that everything seemed to be happening at once, or that time lost all its meaning.</p><p>NDEs are measured by the NDE rating scale (Greyson, 1983), which consists of several items, resulting in a total score representing the global intensity of the experience as well as scores for four subscales: Cognitive, Affective, Transcendental, Paranormal. A total score higher than or equal to 7 is considered the threshold for an NDE. </p><p>Dr. Robin Carhart-Harris was someone particularly interested in NDEs, especially their relation to the drug DMT. Professor and head of the Centre for Psychedelic Research at Imperial College London, he was conducting research on the much-defamed psychedelic DMT (N,N-dimethyltryptamine), which belongs to a class of serotonergic (serotonin-affecting) psychedelics that also includes Lysergic acid diethylamide (LSD) and Psilocybin. Amazonian tribes routinely consume ayahuasca (‘the wine of the soul’), a brew made from leaves of the P. viridis plant, which have been shown to contain DMT. <br></p><p>Unlike other psychedelics like LSD and psilocybin, DMT is also endogenously produced in animals, including humans. DMT has <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5048497/">also been found</a> in small amounts in human brain tissue and larger amounts in cerebrospinal fluid, a clear fluid that surrounds the brain and spinal cord. <br></p><p>While conducting this research on DMT, he noticed that users' descriptions of the drug-induced experience closely matched descriptions by people who had experienced a Near Death Experience (NDE). 
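</p><p>The arithmetic of the Greyson scale mentioned above is straightforward to sketch: item scores are summed into the four subscale totals and one overall total, and an overall total of 7 or more counts as an NDE. The four-items-per-subscale, 0-to-2-per-item structure below follows the published scale, but the example responses are invented for illustration:</p>

```python
# Sketch of Greyson NDE scale scoring: four subscales (four items each,
# every item scored 0, 1 or 2); a total of 7 or more counts as an NDE.
NDE_THRESHOLD = 7

def score_nde(responses):
    """responses: dict of subscale name -> list of item scores (0-2)."""
    subtotals = {name: sum(items) for name, items in responses.items()}
    total = sum(subtotals.values())
    return subtotals, total, total >= NDE_THRESHOLD

# Invented example responses, not data from any study:
example = {
    "Cognitive":      [2, 1, 0, 0],   # e.g. altered sense of time
    "Affective":      [2, 2, 1, 0],   # e.g. peace, joy
    "Transcendental": [1, 0, 0, 0],
    "Paranormal":     [0, 0, 0, 0],
}
subtotals, total, is_nde = score_nde(example)
```

<p>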
Evidence from studies involving DMT has repeatedly documented the experience of ego dissolution (i.e., a disruption of ego-boundaries which leads to a partial or complete blurring of the distinction between the self and the rest of the environment), a feeling of transcending one’s body and entering into an alternative ‘realm’, perception of a high-pitched ‘whining/whirring’ sound, perceiving and communicating with ‘presences’ or ‘entities’, and reflections on death, dying and the after-life. <br></p><p>Dr. Robin and his team at Imperial College London recruited thirteen healthy volunteers to participate in a placebo-controlled study that aimed to directly measure the extent to which intravenous DMT, given in a laboratory setting, could induce a near-death-type experience as determined by a standard NDE rating scale (Greyson, 1983). An important part of the study was to address how these experiences compared with a sample of individuals who claim to have had ‘actual’ near-death experiences. <br></p><p>Participants were enrolled for two dosing sessions in which placebo and DMT were administered. During the first dosing session, all participants received the placebo, and one week later, DMT. Participants were unaware of the order in which placebo and DMT were administered. The order was fixed in this way to promote safety, by developing familiarity with the research team and environment prior to receiving DMT, and to avoid potential carry-over effects from receiving DMT first.</p><p>Following each dosing session, participants completed questionnaires enquiring about subjective experiences during the DMT and placebo sessions. 
The Greyson NDE scale served as the primary outcome measure.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh7-us.googleusercontent.com/8oRLJCQYOkO_osTLA91CADaa3kzh-U-a_aVoxKu6hI41EU7m04KuO-w0lhUA9NK4Gt-QOOrLEhqdvfspQjSfqFm_i0XtFMsGzmiRCSxCcCzwfAb-61EEMehprSn6RsFv__fltbBi7UZcahEoJG43Jac" class="kg-image" alt="A trip to die for"><figcaption><em>Comparison of results between actual NDE and DMT induced experience</em></figcaption></figure><p><br>When these DMT data were compared with those from a matched sample of ‘actual’ NDEs, a comparable profile was evident. Not to Dr. Robin’s surprise, all participants scored above the conventional cutoff (at or above 7) for a (DMT-induced) near-death experience. To add to this, no significant differences were found between DMT administration and ‘actual’ NDEs.</p><p>Subtle differences that were apparent between the DMT condition and ‘actual’ NDEs may be explainable by the very different contexts in which these experiences occur (e.g., DMT was given here with prior screening, psychological preparation and consent in a safe laboratory setting vs. an NDE occurring during an illness or unexpected accident) as much as by differences due to the inducers themselves or their associated neurobiologies. <br></p><figure class="kg-card kg-image-card"><img src="https://lh7-us.googleusercontent.com/c4VBmirvSULZ65zmGNL1vTr4JbxXm7O0Aj6oGVXKBKdlFOklRVSUyr0p4dpO63aXWOAa2qol5SpJ0ttiU26IGJ3OFW1eCtnaV40lkgSB_pQ5JDzY22Ll0khCMpTzt2UACtAislD49uA4QOeCaLjDyLM" class="kg-image" alt="A trip to die for"></figure><p><br>Dr. Robin’s hypothesis is that the brain releases a rush of DMT at death. After all, DMT has been found in our brains too. <br>A similar study at the University of Michigan observed this happen in rats: researchers directly measured brain levels of DMT as rats suffered cardiac arrest and saw the substance spike up to ten times above baseline levels, enough to trigger psychedelic effects. 
If a similar increase also occurs in humans, it might just account for NDEs and vivid dreaming near death. <br></p><p>Dr Robin Carhart-Harris, who oversaw the study, said: <em>“These findings are important as they remind us that near-death experiences occur because of significant changes in the way the brain is working, not because of something beyond the brain.”</em><br><br>So just a small hit of brain-produced DMT could make our final minutes on Earth a psychedelic adventure.<br></p><blockquote><em>“I’d be scared.”</em><br><em>“Scared of what?”</em><br><em>“Scared of dying, I guess. Of falling into the void.”</em><br><em>“They say you fly when you die.”</em><br>- Feature film: ‘Enter the Void’</blockquote>]]></content:encoded></item><item><title><![CDATA[Decoding Dark Matter: The Story So Far]]></title><description><![CDATA[Despite composing nearly 85% of all matter in the universe, dark matter remains one of the greatest cosmic mysteries. In an exclusive interview, Dr. Disha Bhatia, a dark matter researcher and an IITD alumna, decodes dark matter and the progress we have made so far. ]]></description><link>https://iit-techambit.in/the-dark-matter-enigma-the-story-so-far/</link><guid isPermaLink="false">67d5f586a1df0805311e2bf4</guid><dc:creator><![CDATA[Sriraman]]></dc:creator><pubDate>Sat, 15 Mar 2025 22:19:12 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/04/dark-matter.png" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/04/dark-matter.png" alt="Decoding Dark Matter: The Story So Far"><p>The universe keeps its secrets well—most of the matter is invisible, and we still don’t know what it is made of. In 1933, Fritz Zwicky, a Swiss astrophysicist, was studying the Coma Cluster of galaxies when he noticed something unusual. The galaxies were moving at enormous speeds that should have torn the cluster apart based on the visible mass alone, but clearly that wasn’t the case. 
So, he proposed an invisible substance providing the additional gravitational pull needed to keep the cluster intact. He termed this unseen mass “dunkle Materie” (Dark Matter in German). He estimated that this invisible mass was about 400 times more than what was visually observable in the galaxy cluster.</p><p>While stars, galaxies, and nebulae captivate our attention with their luminous presence, a far more abundant and mysterious entity pervades this universe. Dark Matter is said to make up 85% of all the matter in existence and the truth is: we know little about it.</p><p>Despite its overwhelming presence, this enigmatic substance does not interact with light or any other electromagnetic radiation, making it impossible to detect with traditional observational instruments.</p><p>We interviewed Dr. Disha Bhatia, a FAPESP Fellow at the Instituto de Física, Universidade de São Paulo in Brazil. She received her PhD from Tata Institute of Fundamental Research in 2018 and has worked as a postdoc at The Institute of Mathematical Sciences in Chennai and at the Indian Association for the Cultivation of Science in Kolkata.</p><p>The interview excerpts are as follows:</p><p><strong>How would you explain your studies on dark matter to a lay audience?</strong></p><p>The only evidence we have for dark matter comes from its gravitational effects. Since dark matter is invisible, its presence is inferred from observed anomalies in the motion of stars, which would otherwise be expected to follow Newton’s law of gravitation.</p><p>From our physics lessons, we know that the farther a planet is from the Sun, the weaker the gravitational force it experiences, causing it to move more slowly. This is why Earth orbits faster around the Sun than Jupiter.</p><p>However, this same logic fails to explain the motion of stars around the galactic center. These stars, as Zwicky observed, moved at much higher velocities than expected based on their distance from the center. 
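</p><p>The Newtonian expectation described above can be made concrete with a small numerical sketch: around a central mass M, the circular-orbit speed is v(r) = sqrt(GM/r), falling off as one over the square root of the radius, whereas a flat rotation curve implies that the enclosed mass must keep growing linearly with radius. The mass and speed values below are round, illustrative numbers, not fitted galactic data:</p>

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19    # metres per kiloparsec
M_VISIBLE = 1e41  # illustrative central mass, kg (~5e10 solar masses)

def keplerian_speed(r):
    """Circular-orbit speed around a point mass: v = sqrt(GM/r)."""
    return math.sqrt(G * M_VISIBLE / r)

def enclosed_mass_for_flat_curve(r, v_flat):
    """Mass within radius r needed to sustain a flat curve: M(r) = v^2 r / G."""
    return v_flat ** 2 * r / G

# Keplerian prediction: quadrupling the radius halves the orbital speed...
speed_ratio = keplerian_speed(5 * KPC) / keplerian_speed(20 * KPC)
# ...while a flat curve (e.g. 200 km/s) needs 4x the mass at 4x the radius.
mass_ratio = (enclosed_mass_for_flat_curve(20 * KPC, 2e5)
              / enclosed_mass_for_flat_curve(5 * KPC, 2e5))
```

<p>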
This trend is manifested in the rotation curves of these galaxies. The rotation curve is a plot of the orbital speeds of stars in a galaxy versus their radial distance from that galaxy's centre. As the distance from the galactic centre increases, the velocity is expected to decrease. But the rotation curves are observed to be flat, contrary to the expected decline.</p><p>There are two possible explanations for this discrepancy: either Newton's law of gravitation does not apply accurately at such vast distances, or there is more unseen matter exerting additional gravitational force on the stars, thereby increasing their speeds.</p><p>This anomaly is not limited to motions of stars within the galaxies alone. The same discrepancies are also observed at the scale of galactic clusters and in measurements of Cosmic Microwave Background (CMB) radiation—often seen as the static or "noise" on old TV screens. The CMB radiation is like the afterglow of the Big Bang– it’s a weak glow of microwave radiation that fills the entire universe. These anomalies are consistent with the idea of missing matter. Modified Newtonian dynamics (MOND), an alternative to dark matter, has so far failed to provide a consistent explanation across all length scales.</p><p>Hence there is great agreement with the idea of missing/invisible matter or dark matter.</p><p>While we've made observations at the galaxy scale and have a rough estimate of the amount of dark matter in our galaxy, there are still many unknowns. For instance, we don't know its mass, spin, or whether it interacts with visible matter in any way other than gravitation.</p><p>These fundamental questions form the foundation of dark matter research. It is highly interdisciplinary, drawing on insights from various branches of physics—astrophysics, cosmology, particle physics, quantum mechanics, statistical physics, and more.</p><p></p><p><strong>What are the different ways in which dark matter can be detected? 
What evidence do we have of dark matter as of today?</strong><br></p><p>There are three main methods for detecting dark matter: direct detection, indirect detection, and particle production.</p><ol><li><strong>Direct detection </strong>involves attempting to observe dark matter particles directly. Since dark matter exists in our galaxy, it can travel to Earth and collide with detectors. By measuring the recoils from the elastic scattering of dark matter with the detector material, we can identify potential interactions. Experiments like XENON are designed to detect these signatures.</li><li><strong>Indirect detection </strong>focuses on looking for the byproducts of dark matter annihilations in galaxies into known Standard Model particles, such as gamma rays. Experiments like Fermi-LAT and AMS are used to probe the indirect signatures of dark matter.</li><li><strong>Particle production </strong>involves creating dark matter particles in accelerator experiments and looking for missing energy signals that could point to the presence of these elusive particles. Large experiments like the <strong>Large Hadron Collider (LHC) </strong>are actively studying potential dark matter signatures.</li></ol><p>The only evidence we have for dark matter so far comes from its gravitational interactions. These include the rotation curves of stars at the galactic scale, gravitational lensing at the scale of galaxy clusters, and the study of cosmic microwave background (CMB) radiation.</p><p>To briefly explain gravitational lensing: Einstein's general theory of relativity tells us that space-time is curved by the presence of mass and energy. As light travels through this curved space-time, it follows the shortest path, which appears bent due to the curvature. The greater the mass, the stronger the curvature, resulting in more bending of light. 
Gravitational lensing helps identify missing matter, as the observed bending of light is often greater than what can be explained by visible mass alone.<br></p><p><strong>Why do you think dark matter research is especially relevant today?</strong></p><p>Dark matter research is crucial because it makes up the majority of the mass in our universe. Galaxies have largely formed due to the presence of dark matter. As a result, understanding dark matter is key to explaining the universe's evolution, from its early stages to its present state. Ongoing research into dark matter will unlock insights into its complex nature and address fundamental questions about its mass, spin, and other properties. While visible matter is composed of quarks, leptons, and gauge bosons, dark matter—roughly five times more abundant than visible matter—must also consist of its own set of particles. Therefore, the research is relevant.<br></p><p><strong>What do you believe are some of the most important future applications of understanding dark matter? How can it potentially help in space exploration?</strong></p><p>Dark matter research is still in its early stages. First, we need to determine the particle properties of dark matter before we can explore its potential applications in the future. However, the sensitivity of current dark matter experiments has already reached a level where similarities exist between the technologies used in quantum computing and those employed in dark matter direct detection experiments. In fact, some scientists have recently used data from quantum computing experiments to help constrain the nature of dark matter.</p><p>Additionally, dark matter experiments can also serve as probes for studying radioactive decay in rocks and detecting neutrino signatures. 
These neutrinos can cause recoils in the detector materials in a manner similar to dark matter particles.</p><p>As for space exploration, it is worth noting again that dark matter is the most dominant form of matter, playing a crucial role in galaxy formation during the universe's early stages. While visible galaxies are typically around <strong>10 kpc </strong>in size, the dark matter halo—derived from the rotation curves of stars—extends to regions as far as several hundred kiloparsecs. <em>(Note: A parsec is a unit of astronomical distance, equivalent to about 3.26 light-years.) </em>Studies of large-scale light bending due to dark matter, galaxy rotation curves, stars that are composed primarily of dark matter, and dwarf galaxies dominated by dark matter are among the key probes that can help constrain our understanding of dark matter and the universe.<br></p><p>The <strong>James Webb Space Telescope (JWST) </strong>plays a vital role in constraining the properties of dark matter by studying galaxy formation and gravitational lensing. Through these observations, JWST can map the distribution of dark matter by analyzing how light is bent by massive objects like galaxy clusters.</p><p>The <strong>Planck satellite</strong>, another important tool, was a space-based observatory launched to study the Cosmic Microwave Background (CMB) radiation in high detail. Its measurements provide crucial insights into the early universe and contribute to our understanding of dark matter by revealing its effects on the CMB and large-scale cosmic structures.</p><p>There are several other experiments, both direct and indirect, that contribute to constraining the properties of dark matter. 
Together, these efforts will continue to shape our understanding of this elusive substance and its role in the cosmos.</p><p><strong>What are some of the theoretical ideas and mechanisms behind the creation of dark matter?</strong> <strong>Which model do you personally support the most?</strong></p><p>There are two broad categories of dark matter production in the early universe: thermal production and non-thermal production.</p><p>Thermal production of dark matter is similar to how Standard Model particles were produced in the early universe. In this scenario, dark matter was in thermal equilibrium with other particles and could be described using thermodynamic quantities such as temperature, pressure, and density. The interactions between dark matter and other particles were frequent enough to maintain this equilibrium. In this state, dark matter particles were continuously being created and annihilated through interactions with Standard Model particles. However, as the universe expanded, it also cooled. There came a point where these interactions slowed down and eventually froze out, leaving a finite density of dark matter particles.</p><p>On the other hand, non-thermal production of dark matter refers to scenarios where dark matter is created without being in thermal equilibrium. This can occur through various mechanisms, such as gravitational interactions, i.e., via fluctuations in the vacuum during the early universe. Dark matter could also be produced through the decay of heavy particles or through the decay of primordial black holes.</p><p>Personally, I find the thermal production mechanism more appealing because it is more predictive and aligns with the way Standard Model particles are believed to have been produced in the early universe.</p><p><strong>You have done research in different areas of physics such as particle physics, astrophysics and dark matter physics. 
How do you think such interdisciplinary research has helped you?</strong></p><p><strong>Going forward, which particular research area are you planning to focus on?</strong></p><p>I believe that interdisciplinary science is crucial in today's era. In virtually every field, progress is driven by the combined efforts of various teams. For example, take the experiment at the Large Hadron Collider (LHC). Building a successful collider like this requires inputs from multiple disciplines, such as theoretical and experimental particle physics. However, constructing the detectors themselves also involves condensed matter physicists and engineers. It's always a collaborative effort.</p><p>The same is true for the study of dark matter, which is a problem that spans both large and small scales. On the large scale, we detect the presence of dark matter through its gravitational interactions, such as the motion of stars in galaxies or galaxy clusters. On the small scale, we seek a particle physics explanation for dark matter's nature. To truly make strides in science, having a broad understanding of various fields, coupled with a firm grasp of the specific sector you contribute to, is essential.</p><p>Looking ahead, I believe that interdisciplinary research will become even more vital. Already, there are emerging signatures of stochastic gravitational waves through pulsar measurements, and scientists are trying to see whether they can be explained by a primordial phenomenon. Additionally, there are strong ongoing efforts to study stars and the cosmos as laboratories that naturally produce highly energetic particles. These cross-disciplinary endeavors will likely unlock new insights into dark matter and other fundamental mysteries of the universe.<br></p><p><strong>Areas such as particle physics and dark matter physics are highly abstract and require expensive experiments to validate the theory. 
What are some of the challenges that you faced while researching these areas or otherwise?</strong></p><p>I think we are really in a very interesting time where several experiments have been sanctioned and are running. The biggest challenge that lies ahead of us is the null results so far at all experiments. Since dark matter mass can lie in a very broad range – from 10^-22 eV to a few solar masses – it becomes hard for experimentalists to build experiments scanning all mass ranges. Some theoretical guesses are made as to where the dark matter mass is more likely to lie. With the null results, we may have to think out of the box beyond the theoretical guesses.</p><p><strong>Having done your Masters in physics at IIT Delhi, how would you describe your time at IITD and how do you think it has benefited you? What is one memorable experience that you cherish?</strong></p><p>We had an amazing time at IITD, filled with many unforgettable memories. Our class was fortunate to be taught by Ajoy Ghatak, who very happily agreed to take a special course on Quantum Mechanics. We also had the privilege of learning from K. Thyagarajan, not only about optics but also about origami. Dilip Rangathanan, with his exceptional teaching skills, continually amazed us. And, of course, there was Prof. Ajit Kumar, who brought his own unique style to teaching. We learned not only physics but also several skills from these amazing people. The staff members were also very helpful. <br></p><p><strong>Being a young researcher yourself, what message do you wish to convey to students who wish to pursue research in theoretical physics?</strong></p><p>I think doing higher studies and research is very interesting. The most important advice I can give is to focus on understanding the basic concepts first, before jumping into internships and research projects. 
A strong theoretical foundation in the basics like electrodynamics, quantum mechanics, mathematical physics, statistical mechanics etc. will help you in research later on.</p><p>Start slow and steady. Attend talks to see what interests you the most. You may not understand the full talk, but nonetheless it will give you an overall idea of what's happening in the field.</p><p>Talk to faculty members about their research during coffee breaks or whenever you can. This helps you learn more and think about different areas of research. Research takes years of hard work. There will be challenges, so it's important to stay persistent and not lose confidence.<br><br>-------</p><p>Dark matter stands as a silent testament to how much of the universe remains unknown, even as our understanding deepens. As technology surges forward and our theoretical horizons expand, the hunt for dark matter is slowly taking shape.</p><p>The fact that we don’t yet fully understand the majority of matter in our universe proves that this is just the beginning of the story – the story of decoding Dark Matter.</p><p><br></p>]]></content:encoded></item><item><title><![CDATA[The Essence of Han Kang]]></title><description><![CDATA[In 2024, Han Kang became the first South Korean and Asian woman to win the Nobel Prize for Literature, honored for her "intense poetic prose" on humanity's fragility and trauma. We interviewed Prof. Leonard Dickens from IIT Delhi and Uma Madhu, a PhD scholar, to explore her literary significance.
]]></description><link>https://iit-techambit.in/the-essence-of-han-kang/</link><guid isPermaLink="false">67753e78170c32052635b7cc</guid><dc:creator><![CDATA[Mahima Mukherjee]]></dc:creator><pubDate>Wed, 01 Jan 2025 13:53:15 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/01/1000084381.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/01/1000084381.jpg" alt="The Essence of Han Kang"><p></p><p>In 2024, the Nobel Prize for Literature was awarded to Han Kang, the critically acclaimed author of works such as ‘The Vegetarian’ and ‘Human Acts’, and the first ever South Korean person and Asian woman to win this prize. Her ‘intense poetic prose’, to directly quote The Royal Swedish Academy, has taken the world by storm, and ever since receiving the Man Booker International Prize in 2016 for ‘The Vegetarian’, her writings have carved out a niche in the vast domain of Korean and global literature, addressing what is often referred to as the ‘fragility of humanity’ in the cocoon of historical trauma and equally-precarious, vulnerable modernity. </p><p>For this piece, we managed to secure an interview with Professor Dickens Leonard, a professor of literary and cultural studies from IIT Delhi, and Uma Madhu, his former student and a PhD research scholar with a background in critical and political theory of contemporary literature, who recently completed her dissertation on Han Kang herself. We believe this panel of interviewees was the perfect fit to provide us with a fresh and insightful perspective into Kang’s literature and history, and help us comprehend the significance of this prize and her works with a more trained eye. It also expanded into an eye-opening discussion on the transformative currents within literature and its crucial link with the self and the circumstances we’re surrounded by. We’re grateful to them for giving us their precious time.</p><h3 id="who-is-han-kang">Who is Han Kang? 
</h3><p>A resident of Seoul, Korea, and the daughter of a teacher-novelist herself, Han Kang is a Korean language and literature graduate who’s taught at the Seoul Institute of the Arts for a decade and a half, and been a published author and poet since 1993 - when five of her poems came out in a magazine, marking the beginning of a long career spanning many more successful publications. As an exclusively Korean writer, she first became a part of global discussions about literature following the international recognition earned by her book, The Vegetarian, in 2016, after its translation into English for the very first time. As the only Nobel Laureate to have written in Korean, she highlights not only the growing awareness of the distinct eurocentricity of such accolades in the past, but also the conscious shift toward recognizing diversity and uniqueness. In the discipline of comparative literature, it becomes extremely important to acknowledge the world beyond just the English language - and Han Kang, fluent in English but still staunchly a Korean writer, is but one of the gems made accessible to us by the boon of translation. As an aside, Prof. Leonard adds, out of the many languages that The Vegetarian has been translated into, Malayalam and Tamil stand out as some of the only Indian languages on the list - a surprising but pleasant feat, probably made possible by the contents (and title) of the book finding resonance with a South Indian audience! Even if one were to take the politics of representation or the question of westernism out of the equation, a large number of avenues open up simply by awarding a Korean writer the Nobel Prize, Prof. Leonard further points out. After all, an audience of millions is now suddenly going back to see what else she’s published, what else she’s been writing about. If she’s written about color and colonization, she’s also written about a civil war that not everyone knows about. 
It is a discovery of histories, personalized by Han Kang - but also at the hands of her translators, however many and wherever from! There are a lot of questions to be asked. And at the helm of it is an author who’s turned the tides for Korean literature, by writing about the most personal of their national experiences, and receiving the highest of accolades for it.</p><h3 id="what-is-han">What is Han?</h3><p>“So, Han Kang, for me,” Uma says, smiling, in answering what drew her to Han Kang in the first place. “What stood out first was her name. It’s a really unique name.” What compels her to say this about a surname that is quite frankly not uncommon in East Asia is the fact that ‘Han’ is much bigger than just a name. <br>As somebody who learned Korean herself for the purpose of the dissertation, Uma points out that in the Hangul (Korean) script, it’s synonymous with the number ‘1’, is often used to address the nation itself, and also lends its name to the river that runs through the middle of Seoul. But, as most pertinent to Han Kang’s work, it’s a kind of undying grief that weighs within you, and lives with you - a post-colonial ‘resentful sorrow’ that many consider an essential element of Korean identity. <br><br>“Better scholars than me have tried to completely define it and failed,” Uma prefaces, going on to explain. The concept was first introduced in literature by the Japanese scholar Yanagi Soetsu during the Japanese occupation of Korea, when he characterized Korean art and culture as ‘sorrowful’ - deeming the Koreans to be a naturally sad people. This was a blunt colonial essentialization that grew further in the 20th century, as Korea became free of its colonizers but fell into a militant autocracy - which is when Han also came to mean righteous rage, signifying the anger of the people. </p><p>An anger that is used to rise against those in power who have broken them, trampled them, and denied them their freedom. 
It is for these reasons that Han is repeatedly evoked as something untranslatable - something that nobody else will understand, even as it binds together those who do. It is even something you can get treatment for in Korean hospitals, Prof. Leonard adds - a truly modern affective condition, rooted in history and medical science!</p><p>There exist two schools of thought surrounding Han in the contemporary sphere. There are a few who believe that Han is irrelevant in today’s South Korea, because life is good, people are wealthy, development is rapid and all is well. However, according to others, there is a significant disconnect between the human condition and the suddenly modernizing infrastructure.</p><p>We human beings are flesh and blood, prone to wounds and hurts - and when we are compressed into these compact, unyielding structures of capitalism, which do not have the capacity to account for - or sometimes, deliberately refuse to account for - the vulnerability of the human spirit, we get caught within those very mechanisms. The first indication of how this seemingly perfect infrastructure was failing was the Sewol Ferry Tragedy, where, owing to overloading of the boat and delays in rescue operations, over 300 people, most of them schoolchildren, drowned and hundreds more were injured.<br><br>Even more recently, in Itaewon, a crowd surge during Halloween ended up killing more than 150 people, resulting in a similar outpouring of public grief and shock at how the infrastructure had failed all of these people. The other school, thus, believes that Han isn’t something you leave behind so easily. One may not become the conscripted citizen of modernity in the blink of an eye - and at the end, there is a great deal of back and forth between what constitutes a nation and what constitutes humanity. Han Kang’s answer to all of this is simple - fragility and trauma are not entirely historical, or even poetic. It is just - the now. 
The unresolvable grief is simply a condition of living in modernity. And her works take this dilemma one step further and question - when the crisis is chronic, why must we aspire to wellness?</p><p>How do we do it - in fact, can anyone ever succeed in hiding and forgetting their pain in the process of creating this modern, functioning self? And should we, within these circumstances, be well at all, or pretend to be? (This sentiment is also exemplified through her own actions, when she refused to hold a celebratory press conference after receiving the Nobel, citing the ongoing Russia-Ukraine and Israel-Palestine conflicts as the reason.) In Han Kang’s own words, </p><blockquote>“People are being taken out in body bags. So, there is nothing to celebrate in this world at the moment.”<br></blockquote><p>Han Kang’s engagement with the concept of Han is a poetic and cultural enterprise - but through all of it, it is also a project of rehumanization, of writing that vulnerability and grief back into the human body. And this is precisely why, Uma adds, it is not fair to describe Han Kang’s works as merely historical - they work within the contexts of the past to talk of the distinctly present world, and the times that we’re living in. Regardless of whether they’re set against the backdrop of Gwangju or Seoul, Han Kang’s works have the ability to transcend geography and become globally understood. 
And regardless of whether Han is deemed a Korean, untranslatable entity or not, Kang’s books expose their readers’ souls to something intrinsically human - aligning her explorations of trauma, grief, and resilience with universal human experiences, and making Han Kang a global literary phenomenon.</p><h3 id="han-kang-s-biggest-hits-the-vegetarian-and-human-acts">Han Kang’s biggest hits - The Vegetarian and Human Acts</h3><p>The recipient of the 2016 International Booker Prize was Han Kang’s The Vegetarian, a book revolving around Yeong-hye, ‘a young woman living an unremarkable life’ who decides to stop eating meat - something that is almost unheard of in South Korea. It is written in three parts, each told from the perspective of a different person - none of them being Yeong-hye herself, curiously enough - and is a beautifully written interrogation of normalcy, played up by the emphasis on our very ordinary protagonist. Yeong-hye is a child with an abusive war veteran for a father, she is a woman at a job she does not like, she is married to a man who does not care for her - and she has been having visceral dreams of pain and animal cruelty that make her begin to seek a world of existence that is not predicated on harm. And as the narrative progresses - without giving too much away for a prospective reader - everything just kind of begins to fall apart. <br><br>While ‘The Vegetarian’ won a lot of awards globally and earned her international fame, it was ‘Human Acts’ that was her biggest success within Korea itself. Han Kang has also said that it remains her most cherished work, as the book is based on the Gwangju Uprising, a significant event in Korean contemporary history, and one that had left a deep impression on her ever since her father had told her about it as a child. 
Human Acts also involves changing narratives, as characters introduced in the initial chapters later appear as the narrator or central protagonist in the following ones - and it captures the collective trauma of political violence, telling the story of how one boy’s death in the student uprising at Gwangju changes a series of characters’ lives forever. It is a tale of collective heartbreak, and contrasts with The Vegetarian by being more directly political, tying individual experiences to a broader historical framework.</p><p><br>Both novels, and Han Kang’s other books such as ‘Greek Lessons’, ‘The White Book’, and ‘We Do Not Part’, are masterpieces in narrative technique and stylistic writing, but what they also have in common is their tendency to blur the boundaries between personal and political trauma, and to critique social constructs while highlighting the vulnerability of human experience above all. Moreover, her seamless transitioning between these different kinds of themes highlights her versatility as an author, and makes it easy to understand why her stories resonate universally, while remaining deeply rooted in Korean culture and history.</p><h3 id="conclusion">Conclusion</h3><p>In the discussion with Prof. Leonard and Uma, we spoke of Han Kang and her works, the significance of winning a Nobel Prize, and the themes she writes about, including their close bond with the Korean concept of Han. In the end, it is also necessary to speak to the question of - why now. As people, post the pandemic and in this age of aggressive modernization, we are beginning to lose the threads of our own humanity. 
For years, the matter of who we are has been a question we’ve pushed to the back of our minds to focus on what we do - what we make, where we live, what happens next - but it is only now that the world seems to be coming around to trying to figure out where our lives’ worth really lies.</p><p>And the answer it seems to be at the cusp of is exactly what Han Kang’s writings remind us of as well - a person’s worth is located not in their exterior, but inside them, in the things that break them, the pain that binds them together, and the life that they live. She places the dignity of a human soul in its suffering, and it is through this perspective - viewing souls as made of glass and watching them shatter - that the meaning of humanity shines through, just by virtue of being alive.<br><br>As Prof. Leonard concludes, it is in literature like this that there is truly a call to what it means to be a person. And from time to time, across moments of history, this question becomes the most relevant question, and writers like Han Kang - or, for instance, Franz Kafka, in his own era of fierce globalization and world wars - end up becoming stalwarts of their time for attempting to answer it, reaching beyond just penned prose to the realms of literary greatness.</p>]]></content:encoded></item><item><title><![CDATA[Physics at the Frontier: Hopfield and Hinton’s Nobel Journey]]></title><description><![CDATA[The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton for their work connecting physics and machine learning. 
Their innovations, like Hopfield networks and Boltzmann machines, revolutionized neural networks, enabling advancements in quantum science and materials analysis.]]></description><link>https://iit-techambit.in/a-shift-in-the-boundaries-of-physics/</link><guid isPermaLink="false">6767dd99170c32052635b6ec</guid><dc:creator><![CDATA[Arnav Raj]]></dc:creator><pubDate>Wed, 01 Jan 2025 13:05:01 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/01/1000084385.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2025/01/1000084385.jpg" alt="Physics at the Frontier: Hopfield and Hinton’s Nobel Journey"><p><strong>Expanding the Boundaries of Physics</strong></p><p>When the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to John J. Hopfield and Geoffrey E. Hinton, many were caught off guard. These researchers built their reputations in the domains of neural networks and machine learning—areas more often associated with computer science and cognitive science than with physics. Yet, by selecting them, the Academy highlighted how modern physics can include insights drawn from statistical mechanics, thermodynamics, and complex systems theory, and then apply these insights to understanding how information can be represented and learned. Far from the traditional images of colliding particles or orbiting planets, their work deals with patterns, states, and probabilities, but its theoretical roots and guiding principles come directly from physics.</p><p><strong>Physics as a Source of Concepts for Learning Machines</strong></p><p>For much of its history, physics focused on describing matter and energy with clear mathematical laws, often at a fundamental level. Over time, however, physicists have tackled ever more complex phenomena, sometimes involving large collections of interacting elements that cannot be fully described by simple equations. 
Statistical mechanics—a branch of physics developed in the late 19th and early 20th centuries—provided tools to handle these complex systems. It introduced energy functions, probability distributions over states, and equilibrium concepts that help us understand how properties of large ensembles emerge from interactions among individual parts.</p><p>It was this perspective that Hopfield and Hinton applied to neural networks. Artificial neural networks are collections of simple units, or “neurons,” connected by adjustable weights. The challenge is to find a way to set these weights so that the network can learn patterns, store memories, or model data. Hopfield and Hinton showed that principles from statistical physics, such as seeking states of minimum energy or following probability distributions shaped by an energy landscape, are directly applicable to these computational problems. By doing so, they built a bridge between physical intuition and computational learning.</p><p><strong>Hopfield Networks: Energy Functions and Associative Memory</strong></p><p>In the early 1980s, John J. Hopfield, originally trained as a physicist, proposed a class of neural networks now known as Hopfield networks. A Hopfield network is made up of binary neurons, each of which can be in one of two states, often represented as +1 or -1. Every pair of neurons <em><strong>i</strong> </em>and <strong><em>j</em></strong> has a connection weight <em><strong>wij</strong></em>​, and each neuron may also have a bias term. 
The network’s energy function, a key concept borrowed from physics, is defined as:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/12/Screenshot-from-2024-12-22-15-25-55.png" class="kg-image" alt="Physics at the Frontier: Hopfield and Hinton’s Nobel Journey"><figcaption><strong>Hopfield Network Energy Function</strong></figcaption></figure><p>where<em><strong> si</strong></em> represents the state of neuron <em><strong>i</strong></em> and <em><strong>θi</strong></em>​ is a bias term that can shift the neuron’s preferred state. Typically, the weights are symmetric (<em><strong>wij</strong></em>=<em><strong>wji​</strong></em>) and there are no self-connections (<em><strong>wii</strong></em>=<em><strong>0</strong></em>).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/12/hopfiled-enegy-diagram.png" class="kg-image" alt="Physics at the Frontier: Hopfield and Hinton’s Nobel Journey" srcset="https://iit-techambit.in/content/images/size/w600/2024/12/hopfiled-enegy-diagram.png 600w, https://iit-techambit.in/content/images/2024/12/hopfiled-enegy-diagram.png 750w" sizes="(min-width: 720px) 720px"><figcaption><strong>Energy Landscape of a Hopfield Network, highlighting the current state of the network (up the hill), an attractor state to which it will eventually converge, a minimum energy level and a basin of attraction shaded in green.&nbsp;</strong></figcaption></figure><p>This energy function is similar to those found in models of magnetic materials, where spins can point up or down. Just as a physical system tends toward states of lower energy, a Hopfield network evolves its neuron states until it settles into a stable, low-energy configuration. 
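To make the analogy concrete, this downhill relaxation can be sketched in a few lines of code. This is an illustrative toy, not the original model's implementation: the Hebbian storage rule and the 8-neuron pattern below are our own choices for the example.

```python
import numpy as np

# Toy Hopfield network: store one pattern with a Hebbian rule, then
# recover it from a corrupted cue by asynchronous energy-lowering updates.

def train_hopfield(patterns):
    """Hebbian storage: strengthen weights between co-active neurons."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)              # no self-connections (w_ii = 0)
    return W / len(patterns)

def energy(W, s):
    """Hopfield energy E = -1/2 * s^T W s (bias terms omitted here)."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=100, seed=0):
    """Flip one neuron at a time; each flip can only lower the energy."""
    s = s.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1                           # corrupt two bits of the memory
restored = recall(W, cue)               # rolls downhill to the stored pattern
```

Running this, `restored` matches `pattern` again and sits at a lower energy than the corrupted cue did: the cue starts partway up a basin of attraction, and the updates roll it into the well.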
Hopfield showed that if the weights are chosen according to specific learning rules—essentially correlating pairs of neurons that should be active together—these stable configurations can store memories. Presenting a part of a stored pattern as input nudges the network to converge on the full pattern, achieving a form of associative memory. In other words, the system uses physics-inspired energy minimization to retrieve a complete memory from incomplete cues.</p><p>This connection between neural networks and physical energy landscapes was a breakthrough. It placed computation in a familiar physical context: learning corresponded to sculpting an energy landscape so that certain patterns lie in deep “wells,” and recall involved rolling downhill into one of those wells. This analogy made it easier to analyze, understand, and eventually improve neural network models.</p><p><strong>Hinton’s Boltzmann Machines: Probabilistic Models and Learning from Data</strong></p><p>Around the same period, Geoffrey E. Hinton, trained in psychology and computer science but deeply influenced by the ideas of statistical physics, introduced another class of neural networks known as Boltzmann machines. These networks also contain units that can be in binary states, but they incorporate a probabilistic approach inspired by the Boltzmann distribution from thermodynamics. Instead of settling deterministically into the lowest-energy state, a Boltzmann machine explores many states with probabilities governed by the energy function. 
The energy of a Boltzmann machine configuration can be written as:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/12/Screenshot-from-2024-12-22-15-41-09.png" class="kg-image" alt="Physics at the Frontier: Hopfield and Hinton’s Nobel Journey"><figcaption><strong>Boltzmann Machine Energy Function</strong></figcaption></figure><p>where <em><strong>si</strong></em> is the state of neuron <em><strong>i</strong></em> and <em><strong>bi</strong></em>​ is a bias for that neuron. The probability of the system being in state <em><strong>s</strong></em> is given by the Boltzmann distribution:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/12/Screenshot-from-2024-12-22-15-37-57.png" class="kg-image" alt="Physics at the Frontier: Hopfield and Hinton’s Nobel Journey" srcset="https://iit-techambit.in/content/images/size/w600/2024/12/Screenshot-from-2024-12-22-15-37-57.png 600w, https://iit-techambit.in/content/images/2024/12/Screenshot-from-2024-12-22-15-37-57.png 686w"><figcaption><strong>Boltzmann Distribution</strong></figcaption></figure><p>where <em><strong>T</strong></em> is a parameter analogous to temperature, <em><strong>kB</strong></em>​ is Boltzmann’s constant, and <strong><em>Z</em></strong> is a normalization factor (partition function) that ensures all probabilities sum to one.</p><p>When “learning” from data, a Boltzmann machine adjusts its weights so that the probability distribution represented by the network matches the distribution of observed data. Learning involves a two-step process: first, measure how often features occur together in real data, and then compare this to how often they occur in states sampled from the model. By adjusting weights to reduce any differences, the model gradually improves its internal representation of the data. 
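For a network small enough to enumerate, this distribution can be computed exactly by brute force. The weights, biases, and temperature below are made-up toy values chosen only to illustrate the formula:

```python
import numpy as np
from itertools import product

# Exact Boltzmann distribution for a 3-neuron toy network: enumerate all
# 2^3 configurations, weight each by exp(-E/T), and normalize by the
# partition function Z. (Boltzmann's constant is folded into T here.)
W = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.3],
              [-0.5, 0.3,  0.0]])
b = np.array([0.1, -0.2, 0.0])
T = 1.0

def E(s):
    """Energy of one configuration: E = -1/2 s^T W s - b . s"""
    s = np.asarray(s, dtype=float)
    return -0.5 * s @ W @ s - b @ s

states = list(product([-1, 1], repeat=3))            # all configurations
boltzmann = np.array([np.exp(-E(s) / T) for s in states])
Z = boltzmann.sum()                                  # partition function
probs = boltzmann / Z                                # probabilities sum to 1

# Lower-energy states are exponentially more probable: the most likely
# configuration is exactly the minimum-energy one.
```

Enumeration is only feasible for tiny networks; with n neurons there are 2^n configurations, which is why practical Boltzmann machines sample states rather than enumerate them.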
This process requires sampling many states, akin to a physical system exploring its energy landscape, and pushing the network toward a distribution that captures the structure of the input data.</p><p>Hinton’s contribution lay in showing that these statistical physics principles—using probability distributions over configurations, adjusting weights to match observed statistics, and viewing the network as a system that can fluctuate among states—offer a powerful approach to machine learning. His Boltzmann machines provided a blueprint for building models that do not just store patterns but learn entire probability distributions. This concept became central to many later developments in machine learning, including deep learning methods that underlie many present-day applications.</p><p><strong>From Theory to Practice</strong></p><p>When Hopfield and Hinton first introduced these ideas, computers were slow, data scarce, and interest mostly academic. Hopfield networks and Boltzmann machines were small and not always practical. Yet their formulations built on solid physical and mathematical foundations, and that durability allowed these ideas to influence more advanced methods. As computing power grew and large datasets became available, researchers refined these concepts into more tractable forms.</p><p>One of the key refinements was the development of “restricted Boltzmann machines” (RBMs), a simplified version of the Boltzmann machine. RBMs removed certain connections to make learning faster and more stable. Stacking multiple RBMs led to “deep belief networks,” which helped kickstart the modern deep learning revolution. 
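One weight update of that simplified training can be sketched as follows, assuming Hinton's later contrastive-divergence (CD-1) procedure for RBMs; the layer sizes, learning rate, and data vector are arbitrary toy values, and bias terms are omitted for brevity:

```python
import numpy as np

# One contrastive-divergence (CD-1) weight update for a toy restricted
# Boltzmann machine: a visible and a hidden layer, no within-layer links.
rng = np.random.default_rng(1)
n_vis, n_hid, lr = 6, 3, 0.1
W = rng.normal(0.0, 0.1, size=(n_vis, n_hid))   # visible-to-hidden weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W):
    """Nudge model statistics toward data statistics (biases omitted)."""
    h0 = sigmoid(v0 @ W)                         # p(hidden | data)
    h_sample = (rng.random(n_hid) < h0) * 1.0    # stochastic hidden states
    v1 = sigmoid(W @ h_sample)                   # one-step reconstruction
    h1 = sigmoid(v1 @ W)                         # p(hidden | reconstruction)
    # Positive phase (data) minus negative phase (model sample):
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))

v0 = np.array([1, 1, 0, 0, 1, 0], dtype=float)   # one toy training vector
W = cd1_step(v0, W)
```

Repeating this step over many data vectors pulls the model's distribution toward the data distribution without ever computing the intractable partition function, which is what made RBMs practical to train.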
Although today’s deep learning systems often rely on gradient-based methods like backpropagation rather than pure Boltzmann machine training, the underlying idea that a network can learn complex distributions, guided by principles resembling statistical physics, never disappeared.</p><p><strong>Applying These Methods in Physics</strong></p><p>As the field of machine learning advanced, physicists began to realize that these tools could help solve problems once thought too complex for direct analytical methods. Particle physics experiments, such as those at the Large Hadron Collider, generate enormous volumes of data every second. Identifying subtle patterns—rare particle decays or unexpected anomalies—requires sifting through this data in ways that human analysts or simple formulas cannot match. Neural networks, building on Hopfield’s and Hinton’s foundational ideas, can rapidly classify events, detect anomalies, and find meaningful structures hidden in noise.</p><p>In quantum physics, many-body systems can be so complicated that writing down an exact wavefunction is impossible for large numbers of particles. Machine learning methods, often informed by energy-based or probabilistic models, can approximate these wavefunctions numerically. By treating the wavefunction as something that can be learned (much like a probability distribution in a Boltzmann machine), researchers can find more efficient ways to describe and simulate quantum states. This opens paths to tackling quantum problems that were once out of reach.</p><p>Materials science also benefits. Predicting the properties of a new material before synthesizing it is a complex challenge. Neural networks can learn patterns relating a material’s structure to its properties, guiding experimentalists in selecting candidates worth exploring in the lab. 
This reduces trial and error and speeds up innovation.</p><p><strong>A Subtle Redefinition of Physics</strong></p><p>Some critics question whether awarding the Nobel Prize in Physics to scholars whose primary reputation lies in neural networks stretches the definition of physics. Yet, the methods Hopfield and Hinton created are grounded in physical thinking. The analogy to energy landscapes, the probabilistic distributions drawn from thermodynamics, and the idea of equilibrium states are all directly borrowed from physics. Their networks represent a fusion of physical intuition with computational goals. This did not merely apply physics to another field; it advanced physics by providing a broader set of conceptual tools that can handle complexity and uncertainty.</p><p>Their work teaches that physics need not only be about particles, forces, and fields. It can also involve frameworks that treat information, patterns, and learning as physical processes with energy functions and probability distributions. These methods help physicists unravel problems where the complexity resists simple closed-form solutions. The Nobel Committee’s recognition underscores that physics can embrace such methods, and that doing so expands the frontiers of what physics can tackle.</p><p><strong>Looking to the Future</strong></p><p>The legacy of Hopfield and Hinton’s work is not just in the tools we have today, but in how it encourages physicists to think. Future physicists will likely see machine learning as a standard part of their training, as essential as differential equations or experimental design. They will treat large datasets from particle detectors, gravitational wave observatories, and quantum experiments not as insurmountable challenges, but as problems well-suited to methods that Hopfield and Hinton pioneered. 
They will also recognize the importance of understanding the energy landscapes and probability distributions that govern complex systems, whether those systems are collections of neurons, atoms, or spins.</p><p>At the same time, Hopfield and Hinton’s achievements raise important questions about how these tools are used. As machine learning systems become embedded in everyday life, questions about fairness, bias, and unintended consequences arise. Although these concerns extend beyond physics, the Nobel Prize acknowledges that these computational tools are now part of the physicist’s world. Physicists can no longer ignore how their methods might be used, or misused, outside the lab.</p><p><strong>A Balanced Legacy</strong></p><p>John J. Hopfield and Geoffrey E. Hinton’s contributions began with an effort to understand how simple models of interconnected units could store and recall patterns, or learn statistical structures from data. Grounded in physical concepts of energy and equilibrium, these networks provided a new way of modeling complexity. Over time, they inspired a wide range of powerful techniques that now shape research both within and beyond physics.</p><p>By awarding them the Nobel Prize in Physics, the Academy has affirmed that the spirit of physics—seeking fundamental principles and using them to understand the world—can manifest in unexpected ways. The methods that Hopfield and Hinton developed show that physics-based thinking can illuminate not only the cosmos and subatomic particles, but also the patterns hidden in data and the processes of learning itself. 
In this sense, their work stays true to the deepest values of physics: using mathematical frameworks and careful reasoning to reveal order and meaning in a complex and often puzzling reality.</p>]]></content:encoded></item><item><title><![CDATA[The British have done it again!]]></title><description><![CDATA[Nobel Prize in Economics 2024 was awarded to Acemoglu, Johnson and Robinson for their research on the role of institutions in shaping economic growth. We spoke with Prof. Abhijit Banerji from IIT Delhi, about the impact of inclusive policies and the challenges posed by systems of inequality.]]></description><link>https://iit-techambit.in/its-the-british-again/</link><guid isPermaLink="false">6769572b170c32052635b782</guid><dc:creator><![CDATA[Suhani Soni]]></dc:creator><pubDate>Wed, 01 Jan 2025 13:02:40 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2025/01/1000084383.jpg" medium="image"/><content:encoded><![CDATA[<blockquote><em>An ‘iron law of oligarchy’ means that even when oligarchs are overthrown, the revolutionaries, like the pigs in Animal Farm, often come to resemble them.</em><br><em>‘New leaders overthrowing old ones with promises of radical change bring nothing but more of the same’.</em><br><em>Understanding how change doesn’t happen is as important as understanding why it does.</em></blockquote><img src="https://iit-techambit.in/content/images/2025/01/1000084383.jpg" alt="The British have done it again!"><p>Switzerland's GDP per capita in 2023 was over $89,000, a staggering 712% of the world's average. Sudan, on the other hand, has a per capita income of $495. <br></p><p>Why are some countries so overwhelmingly rich, while others barely manage to survive?<br></p><p>Is it geography, culture or just pure luck? AJR disagree. They think it is the British. <br>And they also won a Nobel Prize for it. </p><p>(PS. 
Economists Daron Acemoglu, Simon Johnson, and James Robinson are collectively known as AJR.)<br></p><p>Traditionally, most economists study the growth of countries by exploring factors like labor, machinery, education, technology, and savings. These are known as the "proximate causes" of growth, as they directly impact the economy. For example, if a country builds more factories or improves education, its economy will most likely benefit. <br></p><p>But AJR focus on a deeper question: <em>What determines how much a country invests in education or factories in the first place?</em> They argue that the answer lies in institutions—the systems and rules that structure society. <br></p><p>They broadly categorize institutions into two types: inclusive and extractive institutions. <br></p><p>As the names suggest, inclusive institutions ensure fairness, protect property rights, and create opportunities for everyone, while extractive institutions favor a small group of elites at the expense of the majority. <br></p><p>Inclusive institutions create modern, wealthy economies that are driven forward by technological innovation, not merely propped up by having won the natural-resources lottery.<br></p><p>We interviewed Prof. Abhijit Banerji from the Economics Department at IIT Delhi about AJR's key ideas, why their research is so pivotal yet so controversial, and how it may apply to India. <br></p><p><strong><em>So, what were the ideas that won this year’s Nobel prize in economics?</em></strong></p><p>In their famous paper, <em>The Colonial Origins of Comparative Development</em> (2001), AJR examined how colonialism shaped modern institutions. 
They studied historical data to understand why some former colonies, like the United States and Canada, became wealthy, while others, like many African and South Asian countries, remain poor.</p><p>Their findings revealed an interesting pattern: in places where European settlers faced low mortality, such as the US and Australia, they established inclusive institutions to benefit themselves and future generations.</p><p>Diseases like malaria and cholera that spread in countries like India were deadly for the British, who didn’t have the immunity that the indigenous people had developed over the years. In regions where settler mortality was high due to such diseases, colonizers focused on extracting resources and wealth, setting up extractive institutions.</p><p>Also, in countries like India, Brazil, Mexico and many parts of Africa, large populations and stronger resistance made it difficult for the colonizers to maintain control. But once they gained hold, they exploited these large populations for cheap labour and resources. And since there was plenty of labour already, fewer European settlers moved to these places. The systems they set up, known as extractive systems, benefitted the colonisers at the expense of the local population. <br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/12/image-2.png" class="kg-image" alt="The British have done it again!" 
srcset="https://iit-techambit.in/content/images/size/w600/2024/12/image-2.png 600w, https://iit-techambit.in/content/images/size/w1000/2024/12/image-2.png 1000w, https://iit-techambit.in/content/images/size/w1600/2024/12/image-2.png 1600w, https://iit-techambit.in/content/images/2024/12/image-2.png 1692w" sizes="(min-width: 720px) 720px"><figcaption><em>Reduced-form relationship between the GDP of nations and their settler mortality</em></figcaption></figure><p>Even after independence, it was too costly for the new governments to overhaul these exploitative systems. So in some places, they continued using forced labour to maximise production, even if that meant keeping their economies stuck in poverty.<br></p><p>On the flip side, places like Australia, Canada, New Zealand, the US, Hong Kong and Singapore had smaller populations, so European settlers moved into these places to make up for the labour shortage. The systems they created were more inclusive and benefited everyone since there wasn’t much they could exploit. And thus, these countries went on to prosper.</p><p>This legacy still affects these countries today because institutions change very slowly over time. In their research, AJR used settler mortality as a statistical instrument to show that good institutions cause, rather than merely accompany, higher GDP per capita.</p><p>AJR also argue that nations with democratic institutions have the most economic growth. In a dictatorship or monarchy, political power is narrowly distributed and relatively unconstrained. In such cases, AJR argue, a small ruling class will tend to use its power to restrict competition and extract wealth for itself. 
By contrast, if political power is widely distributed across diverse groups in a society, then their common interest in doing business, and competition among them, will result in prosperity-generating economic institutions.<br></p><p>AJR claim that decentralized democratic systems such as those found in the U.S., Germany, and Switzerland foster economic prosperity by improving a nation’s ability to innovate. Democracies with more centralized power are less productive. (Think of countries such as France, Portugal, and Greece, which have less separation of powers, fewer checks and balances, and relatively weak state and local governments.) Finally, one-party states and authoritarian regimes—those with even more centralized and less competitive political systems—breed stagnation.<br></p><p><em><strong>But it’s been so long since colonialism. Why haven’t extractive institutions changed with independence?</strong></em></p><p>Prof. Banerji highlighted an important point from AJR’s research: institutions are difficult to change, especially when there are conflicts between elites (the powerful) and ordinary citizens. Elites often resist changes that could threaten their privileges. Extractive institutions favour those in power, so those in power ensure that things stay as they are.</p><blockquote><em>“The luddites in the presidential palace or the chamber of commerce do far more damage than the protesters on the streets.”</em></blockquote><p>Also, commitment problems—when parties fail to keep their promises—make it challenging to reform inefficient institutions. <br></p><p><em><strong>Wait, what about China?</strong></em></p><p>One of the biggest challenges to AJR’s theory is the case of China. Since the late 1970s, China’s economy has grown rapidly, even though it remains an authoritarian state.</p><p>Prof. Banerji explains that China has implemented some policies that mimic inclusive institutions, even under a dictatorship. 
For example, in the late 1970s, China moved from collective farming to a system where farmers were given long-term leases on land, incentivizing productivity.</p><p>Similarly, town and village enterprises allowed local governments to manage businesses and keep the profits, fostering entrepreneurship. Additionally, state-owned industries were permitted to sell surplus production at market prices.</p><p>However, AJR have raised concerns about the sustainability of China’s growth and predict that over time, its economy will decline. Centralized political control and the suppression of entrepreneurs, such as Jack Ma, could stifle innovation in the long run.<br></p><p><strong><em>What About the United States?</em></strong></p><p>AJR often describe the United States as a model of inclusive institutions. Over centuries, its democratic system, protection of property rights, and open markets have driven innovation and prosperity.</p><p>Why, then, is democracy in the US in the state it is today, with the current President denying the last election’s results? What has led to this decline in a country that seemingly had inclusive institutions? How can an inclusive system fall apart?</p><p>Political polarization and challenges to democratic norms raise questions about the future of US institutions. Prof. Banerji suggested that institutional imbalances, like lifetime Supreme Court appointments and excessive power in the executive branch, could weaken inclusivity. Additionally, economic discontent among workers displaced by globalization has eroded trust in institutions, fueling populist movements.</p><p>To stop the slide into national dysfunction, political leaders need to focus on those who've been left behind and give them a leg up and a stake in the system. <br></p><p><em><strong>What do Inclusive Institutions look like here in India?</strong></em></p><p>Prof. 
Banerji cited examples from India to show how inclusive reforms can help.</p><p>In the 1980s, farmers in West Bengal were given long-term land leases through Operation Barga, which incentivized them to invest in agriculture. Combined with the introduction of high-yield rice varieties, this reform significantly improved productivity and incomes. Inclusive institutions can also reduce barriers to education and employment, enabling upward mobility.</p><p>However, he emphasized that continuous innovation and targeted policies are needed to address systemic inequalities. For example, in the US, inadequate retraining programs for workers displaced by globalization have deepened economic divides.<br></p><p><em><strong>We have decentralized democracy here in India. Why have we lagged behind?</strong></em></p><p>India is a democratic country with a decentralized system, but it has often lagged behind in creating inclusive institutions. Prof. Banerji pointed to several challenges, including historical legacies like the caste system and zamindari system, which concentrated power and wealth in the hands of a few, restricting social mobility.</p><p>Also, in the mid-20th century, India prioritized heavy industries like steel and cement at the expense of light manufacturing and exports, diverting investments from sectors with higher potential returns.</p><p>Additionally, crony capitalism, where a few large firms dominate the market, stifles competition and innovation, creating extractive tendencies even in a democratic system.<br></p><p><strong><em>How can we apply AJR’s research in policy?</em></strong></p><p>When asked how policymakers could use AJR’s insights, Prof. Banerji stressed the importance of incremental change rather than radical revolutions. Institutions take time to evolve, and abrupt shifts often face resistance. He suggested strengthening property rights and reducing bureaucratic red tape to encourage entrepreneurship and investment. 
Additionally, investing in education, healthcare, and infrastructure can create equal opportunities for all. Encouraging entrepreneurship through fair competition and access to credit is another way to foster inclusivity.<br></p><p><strong><em>Why Are Nobel Prizes in Economics So Concentrated?</em></strong></p><p>A <a href="https://conference.nber.org/conf_papers/f204525.pdf">recent study</a> found that the institutional and geographic concentration of awards in economics is much higher than in other academic fields. Almost all winners of major awards have passed through one of fewer than ten top US universities at some point in their careers.</p><p>Prof. Banerji explained that economics research often requires significant funding and access to large datasets, which are more readily available at top institutions. He also noted that India’s education system, with its focus on rote learning, may not encourage the kind of independent thinking needed for groundbreaking research.<br></p><p><strong><em>Why India Lags Behind in Bringing Home a Nobel</em></strong></p><p>India’s underrepresentation in Nobel Prizes extends beyond economics to other disciplines as well. Prof. Banerji suggested several reasons, including a lack of investment in research, limited state support for structuring and funding research, and an education system that prioritizes rote learning over creativity. Talented individuals often move abroad for better opportunities, contributing to a concentration of expertise in countries like the US.<br></p><p><em><strong>Bringing Change to India</strong></em></p><p>AJR advocate for "quiet change"—small, steady reforms rather than radical revolutions. Prof. Banerji suggested focusing on improving investments by ensuring resources flow to the most promising sectors, such as technology and light manufacturing. 
He also emphasized the need for reforms in voting systems to balance representation and decision-making, and for building human capital by strengthening education and healthcare systems to create a skilled and healthy workforce. Even incremental steps, he argued, can lead to significant progress over time.</p>]]></content:encoded></item><item><title><![CDATA[From Vision to Venture: The Inspiring Journey of Sanjeev Bikhchandani]]></title><description><![CDATA[Discover the remarkable journey of Sanjeev Bikhchandani, the visionary who transformed a simple job-listing idea into Naukri.com, India's leading job portal. His story is a testament to the power of innovation, grit and going against the norm.]]></description><link>https://iit-techambit.in/from-vision-to-venture-the-inspiring-journey-of-sanjeev-bikhchandani/</link><guid isPermaLink="false">660316c208096d053d12d311</guid><dc:creator><![CDATA[Suhani Soni]]></dc:creator><pubDate>Tue, 29 Oct 2024 09:50:49 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2024/10/Untitled--30-_page-0001-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://iit-techambit.in/content/images/2024/10/Untitled--30-_page-0001-1.jpg" alt="From Vision to Venture: The Inspiring Journey of Sanjeev Bikhchandani"><p><em>Dream big, start small</em></p>
<!--kg-card-end: markdown--><p>Born into a middle-class family of doctors and engineers, Sanjeev Bikhchandani was always taught the importance of hard work. His elder brother studied at IIT Kanpur, and the expectations for him were just as high. Such was the rigour at his home that it was presumed he would pursue either medicine or engineering. But it was soon discovered that he was colour-blind, closing the doors on some of those prospective careers. He graduated from the reputed St. Stephen’s College in Delhi, specializing in Economics. He then went on to pursue an MBA at the esteemed IIM Ahmedabad, which proved to be a pivotal point in his life, as it was here that the seed for his first venture was sown.<br><br>Sanjeev was a volunteer at his campus placements at IIM Ahmedabad when he noticed a huge tussle between two major companies trying to secure the best talent at the institute. This deeply astounded young Sanjeev: these were grown men - senior managers and top executives at big companies - screaming and shouting just to recruit the most talented students from the crowd that had applied. That was when he realised the price companies were willing to pay to recruit the best of the best. He had found a gap that he could capitalise on.<br><br>Sanjeev was never scared to go against the norm. He always had a knack for experimentation. He ditched campus placements, approaching companies on his own and writing to them to request interviews. He went on to secure a comfortable marketing job at GlaxoSmithKline, manufacturer of Horlicks.<br>It was at this job that he noticed that all his colleagues - most of them from IIMs - would read the magazine “Business India” back to front, barely even reading the articles in the front. This was because the last few pages of the magazine were filled with job advertisements. 
Bikhchandani thought to himself that if someone built a database of jobs and kept it updated with all the latest postings, it could be a very powerful product indeed. It was one of the many ideas that he toyed with at this time.<br><br>He continued to work, but, always determined to work for himself, he found the job deeply unfulfilling. Even the substantial monetary compensation wasn't enough to make him stay. So, after just 18 months there, he decided it was time to leave - right about the time he launched his first venture under the trademark InfoEdge. In hindsight, he says, “I managed to quit my well-paying job only because I had confidence in myself.”<br><br>He moved into the servant quarters at his father’s place, paying him Rs. 800 as monthly rent. InfoEdge initially sold salary surveys &amp; reports detailing what companies were offering to MBA and engineering freshers on IIT/IIM campuses. Since the business was at a nascent stage and not growing fast, they often faced financial crises. This resulted in him largely living off his wife’s salary and giving tuition classes on the weekends for extra income.<br><br>Cut to October 1996, when Bikhchandani was visiting an IT Expo at Pragati Maidan, Delhi, and he came across a stall with a sign that read ‘WWW’. He was told it stood for the “World Wide Web” or the internet. The stall manager gave him a demo on a black-and-white screen and went to a site called “Yahoo” and asked Sanjeev what he wanted to “browse” on “the internet”. And like any good Indian, he asked him to search for ‘India’. A whole bunch of results popped up and Bikhchandani, clearly impressed, made up his mind in 10 minutes to launch a job aggregation website, which he called Naukri.com.<br><br>At that time, all major servers were based in the USA. So, Sanjeev called up his brother at UCLA business school and asked him to rent a server for him. At that time, it cost $25 monthly. 
He didn't have the money then, but he promised to pay his brother back when he was able.<br><br>In April 1997, he launched Naukri.com with some 1,000 ads taken out of various magazines. He was hoping that, if they managed to gather sufficient traffic on their website, companies would over time begin to approach them directly to advertise jobs.<br><br>This turned out to be true quite soon, thanks to Naukri’s unique pricing strategy. Their price per listing was ₹350, and an annual subscription was ₹6,000, under which a company could list all its jobs for one year. Their competitors, i.e. magazines and newspapers, priced a single job listing at ₹3,500. Inevitably, Naukri was a huge hit among the young population, encompassing a huge database of all kinds of professional jobs. Over time, companies came to prefer online listings over paper ads, and hence traffic at Naukri grew exponentially.<br><br>Since then, there has been no looking back for InfoEdge.<br><br>After 10 years, Naukri.com became the first dot com firm listed in India. InfoEdge went on to establish several more website businesses like Jeevansathi.com, Shiksha.com, and 99acres.com. Along with managing InfoEdge, Sanjeev is an active investor, with over a 15.3% stake in Zomato and a 13.3% stake in PolicyBazaar, making his net worth a staggering $2.7 billion.<br><br>Education, according to Sanjeev, is the single most effective investment to turn somebody’s life around. In 2014, he co-founded Ashoka University, a not-for-profit educational institute in Haryana, focussed on the liberal arts. He believes that quality education at mass scale is paramount for the progress of any nation.<br>Nowadays, Sanjeev starts his day at 6 am with meditation and yoga. Then, most of his day is spent managing InfoEdge’s diverse portfolio.<br>As for what he looks for in the startups he invests in: a combination of a compelling value proposition and committed founders. 
Value proposition is important because it is evidence that there is a need for the product - after all, one doesn't want to have to convince people of a problem after a solution has been built. But equally important, in his opinion, are founders of calibre. Founders with grit and vision can turn around any value proposition, however bad.<br><br>He believes there are no bad investments, only investments he earns or learns from. If an investment doesn’t work out, it can be due to various reasons, often completely out of the founder's control - like too much competition, mishandled money, a change in government rules, or simply an honest mistake. Whatever it is, and however the investment turns out, it is important to be respectful to all entrepreneurs. Because when an entrepreneur fails, his hard work of 7-8 years, if not more, fails. It is easier to make a company work than to close it down. Even if an investment turns sour due to a lack of integrity on the founder's part, he emphasises that it is an opportunity to learn to gauge people’s character better.<br><br>Needless to say, in the world of startups, Sanjeev Bikhchandani is the master architect, building careers and companies alike.</p>]]></content:encoded></item><item><title><![CDATA[From Delhi to the Desert: IIT Delhi's Abu Dhabi Venture]]></title><description><![CDATA[Indian Institute of Technology Delhi (IIT Delhi) has launched its first international campus in Abu Dhabi, bringing the prestigious legacy of IIT Delhi to the vibrant landscape of the UAE.]]></description><link>https://iit-techambit.in/from-delhi-to-the-desert-iit-delhis-abu-dhabi-venture/</link><guid isPermaLink="false">6660408608096d053d12d489</guid><dc:creator><![CDATA[Tejasraj Mangla]]></dc:creator><pubDate>Tue, 29 Oct 2024 09:27:46 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2024/10/WhatsApp-Image-2024-10-28-at-1.40.23-AM.jpeg" medium="image"/><content:encoded><![CDATA[<img 
src="https://iit-techambit.in/content/images/2024/10/WhatsApp-Image-2024-10-28-at-1.40.23-AM.jpeg" alt="From Delhi to the Desert: IIT Delhi's Abu Dhabi Venture"><p>Indian Institute of Technology Delhi (IIT Delhi) has launched its first international campus in Abu Dhabi, bringing the prestigious legacy of IIT Delhi to the vibrant landscape of the UAE, marking a significant milestone in global education. This expansion aims to foster innovation and knowledge exchange between India and the UAE.</p><h4 id="vision-and-mission"><strong>Vision and Mission</strong></h4><p>The establishment of the IIT Delhi Abu Dhabi campus is a testament to the strong educational and cultural ties between India and the UAE.  It aims to create a collaborative environment that bridges Indian and Emirati educational excellence, focusing on advanced research and innovation in areas like sustainable energy, climate studies, and cutting-edge technologies. This initiative follows a historic Memorandum of Understanding (MoU) signed in July 2023 between IIT Delhi, the Abu Dhabi Department of Education and Knowledge (ADEK), and the Indian Ministry of Education.</p><figure class="kg-card kg-image-card"><img src="https://iit-techambit.in/content/images/2024/06/Screenshot-2024-06-05-at-16.21.13.png" class="kg-image" alt="From Delhi to the Desert: IIT Delhi's Abu Dhabi Venture" srcset="https://iit-techambit.in/content/images/size/w600/2024/06/Screenshot-2024-06-05-at-16.21.13.png 600w, https://iit-techambit.in/content/images/size/w1000/2024/06/Screenshot-2024-06-05-at-16.21.13.png 1000w, https://iit-techambit.in/content/images/size/w1600/2024/06/Screenshot-2024-06-05-at-16.21.13.png 1600w, https://iit-techambit.in/content/images/size/w2400/2024/06/Screenshot-2024-06-05-at-16.21.13.png 2400w" sizes="(min-width: 720px) 720px"></figure><h4 id="academic-programs"><strong>Academic Programs</strong></h4><p>The IIT Delhi Abu Dhabi campus offers a master's program in Energy Transition and Sustainability, 
aligning with the shared priorities of India and the UAE to promote sustainable development and leverage technological advancements.</p><p>Starting from the academic year 2024-25, the campus will admit its first batch of undergraduate students in September 2024, offering programs in key fields with a total of 60 seats, 30 in each course:</p><ul><li>Bachelor of Technology (B.Tech.) in Computer Science and Engineering</li><li>Bachelor of Technology (B.Tech.) in Energy Science and Engineering</li></ul><p><strong>BTech in Energy Engineering:</strong> This program focuses on sustainability and equips students with the skills to address challenges in energy production and management. It aims to cover energy resource assessment, cutting-edge technologies, and low-carbon solutions. Core subjects include Energy Resources, Network Analysis, Electrical Machines, Plasma Concepts, Power Electronics, Control Systems, Energy Systems Design, Energy Storage, Smart Grid Operations, Hydrogen Economy, and Energy Economics.</p><p><strong>BTech in Computer Science and Engineering:</strong> This program focuses on Artificial Intelligence and Machine Learning, Software Development, and Hardware Design, along with Quantum Computing, Cryptography, Blockchains, Security, and Privacy. Additionally, the curriculum emphasizes Cloud Computing and Data Analytics. It aims to develop proficiency in abstraction, computational thinking, and problem-solving techniques, preparing students for diverse industries.</p><blockquote><strong>Prof. Shantanu Roy</strong>, the Executive Director of IIT Delhi Abu Dhabi, emphasizes integrating IIT Delhi's rigorous academics with Abu Dhabi's innovative environment. Prof. 
Roy stated, "We aim to provide a transformative educational experience that equips students to tackle global challenges in technology and sustainability".</blockquote><h4 id="admission-process"><strong>Admission Process</strong></h4><p>The admissions for the undergraduate programs at IIT Delhi Abu Dhabi for the academic year 2024-25 will be conducted through two modes: the newly introduced Combined Admission Entrance Test (CAET) and the established JEE (Advanced) exam.</p><ol><li><strong>Combined Admission Entrance Test (CAET)</strong>: This entrance test is specifically for admissions to IIT Delhi Abu Dhabi. It will be held on June 23, 2024, in multiple cities including Abu Dhabi, Dubai, and Sharjah. The test will consist of three 90-minute papers covering Physics, Chemistry, and Mathematics.</li><li><strong>JEE (Advanced)</strong>: Students who have qualified in JEE (Advanced) can also apply for admission.</li></ol><p><strong>Eligibility Criteria for CAET 2024:</strong></p><ul><li>Candidates must have secured at least 75% aggregate marks in Class XII or be in the top 20 percentile.</li><li>Candidates must be born on or after October 1, 1999, with a two-year age relaxation applicable as per UAE national policy.</li><li>Maximum two attempts in two consecutive years for the entrance test.</li><li>Candidates should not have been admitted to any IIT previously.</li><li>For UAE residents and international students, additional criteria include valid scores in EmSAT or SAT.</li></ul><h4 id="fees-campus-accommodation-and-scholarships"><strong>Fees, Campus Accommodation, and Scholarships</strong></h4><p><strong>Tuition Fee:</strong> The annual tuition fee for the four-year course is AED 81,375, totaling AED 325,500 over the four years.</p><p><strong>Accommodation:</strong> Separate on-campus housing is available for male and female students, with double and single occupancy options. The fees are AED 1,000 per month for double occupancy and AED 2,000 per month for single occupancy. 
Amenities include a fully functional kitchen with a stove, fridge, and microwave; laundry services with washers and dryers; an entertainment room with the latest gaming gadgets; a fitness center equipped with state-of-the-art machinery; 24x7 security; and a student lounge for relaxing and socializing.</p><p><strong>Scholarships:</strong></p><p>To draw students, the institute is offering an array of attractive scholarships.</p><p><strong>For UAE National Students</strong></p><ul><li>Full scholarship covering 100% of tuition fee</li><li>A monthly stipend of AED 4,000</li><li>Housing fee waiver for double occupancy for students residing outside Abu Dhabi</li></ul><p><strong>For students admitted through JEE Advanced 2024</strong></p><ul><li>Same tuition fee as payable at IIT Delhi</li><li>A monthly stipend of AED 2,000</li><li>Housing fee waiver for double occupancy for students residing outside Abu Dhabi</li></ul><p><strong>Performance-based Scholarship for International Students and Indian Expats in the UAE</strong></p><ul><li>Tier 1: Full scholarship covering 100% of tuition fees on maintaining a minimum of 8.00/10.00 CGPA</li><li>Tier 2: 50% tuition fee discount for eligible students on maintaining a minimum of 6.00/10.00 CGPA</li></ul><figure class="kg-card kg-image-card"><img src="https://iit-techambit.in/content/images/2024/06/Screenshot-2024-06-05-at-16.22.53.png" class="kg-image" alt="From Delhi to the Desert: IIT Delhi's Abu Dhabi Venture" srcset="https://iit-techambit.in/content/images/size/w600/2024/06/Screenshot-2024-06-05-at-16.22.53.png 600w, https://iit-techambit.in/content/images/size/w1000/2024/06/Screenshot-2024-06-05-at-16.22.53.png 1000w, https://iit-techambit.in/content/images/size/w1600/2024/06/Screenshot-2024-06-05-at-16.22.53.png 1600w, https://iit-techambit.in/content/images/size/w2400/2024/06/Screenshot-2024-06-05-at-16.22.53.png 2400w" sizes="(min-width: 720px) 720px"></figure><h4 
id="infrastructure-and-research-facilities"><strong>Infrastructure and Research Facilities</strong></h4><p>The campus situated in Khalifa City has state-of-the-art infrastructure, featuring modern classrooms, advanced laboratories, and dedicated research centers. The architecture seamlessly blends functionality with aesthetic appeal.</p><p>It emphasizes cutting-edge research in sustainable energy, climate studies, nanotechnology, biotechnology, and artificial intelligence. Collaborations with local institutions like Mohamed bin Zayed University of Artificial Intelligence and Khalifa University will enhance research capabilities.</p><h4 id="future-prospects-and-expansion-plans"><strong>Future Prospects and Expansion Plans</strong></h4><p>The Abu Dhabi campus is part of a broader vision to expand IIT's global footprint. IIT Madras is engaged in discussions with the government of Zanzibar to establish a permanent campus there. This campus would focus on offering specialized programs in areas such as marine science, sustainable development, and renewable energy, leveraging Zanzibar's unique geographical location and natural resources.</p><p>Several other IITs are exploring partnerships with institutions worldwide to offer joint programs, conduct research collaborations, and exchange students and faculty.</p><hr>]]></content:encoded></item><item><title><![CDATA[Harnessing the remarkable rhizosphere]]></title><description><![CDATA[By 2050, climate change could slash global harvests by 17% while the population surges by 2.2 billion. Prof. Shilpi Sharma’s groundbreaking work on soil microbiomes offers hope. 
By harnessing natural soil resistance, her research aims to combat crop pathogens and boost yields sustainably.]]></description><link>https://iit-techambit.in/untitled-9/</link><guid isPermaLink="false">65b87d9c08096d053d12d206</guid><dc:creator><![CDATA[Sparsh Vyas]]></dc:creator><pubDate>Wed, 24 Jul 2024 06:08:35 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2024/08/Untitled--25-_page-0003.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2024/08/Untitled--25-_page-0003.jpg" alt="Harnessing the remarkable rhizosphere"><p><strong>The Crisis </strong></p><p>By 2050, climate change is predicted to have reduced agricultural harvests globally by <strong>17%</strong>, accompanied by a crippling 20% loss of arable land. However, the population is expected to rise by another <strong>2.2 billion. </strong>Taken together, these two statistics show the debilitating impact of climate change and paint a clear picture of the gravity of the situation. There is a dire need for at least a <strong>50%</strong> increase in the yield of food and feed. The work of researchers such as Prof. Shilpi Sharma is indispensable to proposing viable solutions to this rapidly growing issue.</p><p>Prof. Sharma studies the unique natural resistance of soils against pathogens that slash crop yields by huge margins. Her studies have uncovered the natural ability of many soils to suppress the growth of harmful pathogens and plant diseases. This is a highly specific phenomenon: it is endogenous to particular soil types and inhibits the growth of a spectrum of phytopathogens. But interestingly, this ability is inducible, which opens the door to considerable development and a possible means of combating the crisis. For her groundbreaking research in this area, Prof. 
Sharma has been awarded the 2023 Tata Transformation Prize for Food Security.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/01/tata_transformation--1-.jpeg" class="kg-image" alt="Harnessing the remarkable rhizosphere"><figcaption>Credits: Tata Sons and The New York Academy of Sciences</figcaption></figure><p><strong>The Underlying Science </strong></p><p>Before diving into the technicalities, let us first develop a working understanding of the key terms associated with her research.</p><p>The <strong>microbiome </strong>of a higher organism is the set of ‘helpful’ microorganisms associated with it that are essential for its overall fitness and cannot be separated from it. Bacteria in the human gut are an excellent example of this. The <strong>rhizosphere</strong>, literally root-sphere, is a dynamic zone around the plant roots, comprising their microbiome. The plant roots provide these microbes with metabolites essential for their survival. However, different plants have different profiles of metabolite exudation, and hence possess unique sets of microbiomes. The core idea behind this symbiosis is that the plant, in practice, 'recruits' the specific set of microbes that are going to benefit it and pays them back in useful substrates.</p><p>The <strong>phyllosphere </strong>(leaf-sphere) refers to the shoot microbiome. Prof. Sharma does not target this, as it is more prone to fluctuations due to increased exposure to external environmental changes.</p><p><strong>The World of Agri-Biologicals</strong></p><p>Most of agriculture depends heavily on chemical fertilisers to increase yields. But these are simultaneously harmful to the soil in multiple ways and, when used recklessly, can make the soil highly acidic. Agri-biologicals offer a promising alternative. 
Agri-biologicals are biological components used to prevent diseases, increase crop production, and enhance soil fertility. They include bioinoculants (biofertilizers, biopesticides, etc.), which have gained more traction since the establishment of the United Nations Sustainable Development Goals. Experts have varied opinions on whether we need artificially made agrochemicals at all, but Prof. Sharma holds a radical view. She calls for a complete replacement of chemicals with agri-biologicals.</p><p>However, the biggest challenge they face in India is <strong>quality control</strong>. A majority of bioinoculants in Indian markets either do not contain the microbes they claim to, or contain them in much lower counts than needed for effective performance. Another big challenge is their specificity. There are no universal strains that could perform under any environmental conditions for all crops. This becomes an even bigger problem owing to the diversity of India’s terrain. Let’s say one scientist isolates a strain from Rajasthan. Its survival and efficacy in West Bengal would be highly questionable, due to a completely different microenvironment. Moreover, in nature, these microbes are social beings who prefer living as a community instead of as individuals. Hence, using a consortium of multiple strains offers better performance in natural conditions, analogous to an army having higher chances of survival and competence in comparison to a lone soldier.</p><p><strong>Formation of a Synthetic Microbial Community</strong></p><p>Naturally occurring microbial communities contain many different, interdependent microbes, which makes it difficult to grow them separately. While the members of a community and their functionalities are known, application requires a spotlight on members whose absence would be detrimental to the targeted effect. 
So, it becomes essential to select the 'core' members of a microbiome.</p><p>A synthetic microbial community is a simplified form of the natural community, comprising only the culturable key members it cannot do without. Prof. Sharma highlights that there is always a compromise when selecting certain members of a natural microbiome, as the high interdependency among the members of the natural community tends to result in diminished efficiency when the entire community is not present.</p><p><strong>The Richer the Microbiome, the Fewer the Pathogens</strong></p><p>In a rich microbiome with a wide variety of microorganisms, plants have a higher probability of finding microorganisms suited to their needs. In other words, the richer the soil's microbiome, the better its functionality develops, and the plant becomes an 'optimum meta-organism'. This can increase suppressiveness towards plant pathogens through various mechanisms, such as secreting antimicrobial compounds that directly attack a particular pathogen, or creating competition that does not let unwanted microbes thrive.</p><p>From various international research efforts, we know that organic farms have specific suppressive capabilities. As several papers state, microbial diversity is higher in organic soil due to the use of 'organic amendments' like manure and compost, which serve as a good source of nutrients for the microbiome. Therefore, the richer the microbiome, the more likely the plant is to form a stronger base and survive better.</p><p><strong>No Aha! Moment</strong></p><p>When asked about the motivation behind this research, Prof. Sharma notes that there was no ‘Aha!’ moment, no specific point in time when they decided to start working on this. She and her team had been studying sustainable farming practices like organic farming and conservation agriculture, and this project stemmed from that work. 
She has now been working on it for over six years, the process starting when the team realised that organic farms are innately suppressive. She credits a PhD scholar in her lab with coming up with impressive leads for assessing the suppression of a wide range of pathogens.</p><p><strong>Further Research and Challenges Ahead</strong></p><p>Prof. Sharma believes that the biggest challenge in this area is realising a “universal synthetic microbial community” in practice. To achieve this, she and her team must study and analyse a wide range of agroclimatic zones. An ideal synthetic microbial community is one that can be applied to different kinds of conducive soils and make all of them equally suppressive to harmful pathogens. “We need a 'generalized' community at the end of the day,” she explains.</p><p>Prof. Sharma also aims to make a suppressiveness map for all kinds of soil in the country. Her long-term ambition is to include this information in a government initiative known as the Soil Health Card, which provides farmers with a report of the macro- and micronutrient status of their soil. This enables them to use chemicals judiciously and avoid harmful excesses. Moreover, Prof. Sharma wants to expand this report to include the biotic components of the soil as well, owing to the indispensable role they play in predicting the sustainability of agricultural models.</p><p>When asked for advice to readers, Prof. Sharma stresses the importance of finding a field of one’s interest that best utilises one’s unique talents, as opposed to jumping into ‘in’ fields. “I know AI/ML, cancer, etc. are buzzwords for today’s generation,” she smiles. “But it’s smart to use internships to explore as many fields as possible, until you discover your true calling.” She also finds research internships a good way to assess the kind of commitment and engagement a research career involves. 
“There will rarely be Einstein moments; there will be frequent failures, which are normal whenever you attempt something new.” She welcomes internship applications regardless of whether one is an undergraduate, Master’s, or PhD student, but requires at least one academic semester of commitment. Apart from pure biologists, students from Electrical, Civil, Computer Science, and other engineering disciplines with interdisciplinary scope are welcome to contribute to her research.</p><p><a href="https://youtu.be/7THbCweDPTU?si=TV3N79ld9QY07jYJ" rel="noreferrer noopener">Here</a> is a short video on the Professor’s work, made by The New York Academy of Sciences.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://iit-techambit.in/content/images/2024/01/group-Jan-2024.jpeg" class="kg-image" alt="Harnessing the remarkable rhizosphere" srcset="https://iit-techambit.in/content/images/size/w600/2024/01/group-Jan-2024.jpeg 600w, https://iit-techambit.in/content/images/size/w1000/2024/01/group-Jan-2024.jpeg 1000w, https://iit-techambit.in/content/images/2024/01/group-Jan-2024.jpeg 1600w" sizes="(min-width: 720px) 720px"><figcaption>Prof. Shilpi Sharma with her research group</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Continuous Integrated Bioprocessing: A Prospective Revolution in Biotherapeutics?]]></title><description><![CDATA[Biotherapeutics are vital but expensive. Prof. 
Anurag Singh Rathore from IIT Delhi is pioneering continuous bioprocessing to reduce costs by up to 75%, making these treatments more accessible.]]></description><link>https://iit-techambit.in/continuous-integrated-bioprocessing-a-prospective-revolution-in-biotherapeutics/</link><guid isPermaLink="false">65ca64f808096d053d12d26a</guid><dc:creator><![CDATA[Gauri Maurya]]></dc:creator><pubDate>Wed, 24 Jul 2024 06:07:14 GMT</pubDate><media:content url="https://iit-techambit.in/content/images/2024/08/Untitled--25-_page-0001.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://iit-techambit.in/content/images/2024/08/Untitled--25-_page-0001.jpg" alt="Continuous Integrated Bioprocessing: A Prospective Revolution in Biotherapeutics?"><p>Biotherapeutics, crucial for treating cancer and various other diseases, are often financially out of reach for many in India due to their exorbitant cost. These drugs are primarily derived from living cells and offer innovative treatments with promising results. However, their production relies on complex, specialized techniques, which drives up manufacturing costs. In this landscape, Prof. Anurag Singh Rathore of IIT Delhi, winner of the TATA Transformation Prize 2023, has pioneered a project that aims to bring costs down by 50-75%, making biotherapeutics accessible to the poorer strata of the country and, indeed, transforming the world of biotherapeutics as we know it.</p><p>The current method of manufacturing biotherapeutics, known as Batch Processing, contributes significantly to these high costs. While widely used, it suffers from inefficiencies: long processing times, low yields, and heavy resource utilization, all of which raise the cost per unit.</p><p>These manufacturing challenges ultimately impact patient accessibility. 
The high production costs associated with biotherapeutics are often passed on to consumers, rendering these life-saving treatments financially unattainable for many individuals in India, particularly those from lower socioeconomic backgrounds.</p><p>In Batch Processing, materials undergo sequential steps, with each step followed by storage and offline analysis. While this approach has been the norm, it poses significant challenges. Extensive equipment and storage space are required, leading to extended processing times and high manufacturing costs. Consequently, the end result is often an exorbitant price tag attached to biotherapeutic products, making them financially out of reach for many patients. For instance, a single injection vial can cost anywhere from 5,000 to 1,00,000 rupees, further establishing the pressing need for a more cost-effective manufacturing method.</p><p>There is an alternative. Though not a new method, having been discussed for the last decade, it is an ambitious one: Continuous Integrated Bioprocessing, a transformative approach that promises to revolutionize biotherapeutics manufacturing. In this innovative process, all unit operations are seamlessly integrated into a continuous flow, eliminating the need for storage and offline analysis after each step. This streamlined approach not only reduces processing time but also significantly cuts down on manufacturing costs. But the more pressing question is whether quality is compromised. “It increases product consistency and quality because you’re not stopping after each step, but we have not done this for biotherapeutics,” says Prof. Rathore, who has been working on continuous bioprocessing for the past few years. In 2023, he was awarded the prestigious TATA Transformation Prize for developing the first-ever continuous manufacturing facility in academia at IIT Delhi. 
This lab is a possible precursor to the future of advanced bioprocessing in India.</p><p>Prof. Rathore projects that Continuous Integrated Bioprocessing could slash manufacturing costs by an impressive 50-75% and increase productivity by 10-15%, making biotherapeutics more accessible and affordable for patients worldwide.</p><p>However, the biologics industry is approaching continuous manufacturing cautiously, primarily due to several barriers hindering its widespread adoption by large companies. These obstacles include challenges in establishing a compelling business case, largely attributable to existing legacy infrastructure. These companies have already invested heavily in upstream perfusion technology and maintain large facilities for batch operations. Moving to new facilities and training new staff is expensive, especially for products that are already on the market. Unless they were already planning to update their processes, it might not make financial sense for them to make these changes.</p><p>For instance, AstraZeneca is currently grappling with excess production capacity, making any alteration to its existing manufacturing infrastructure to accommodate continuous integrated bioprocessing financially unfeasible, unless the economic benefits can offset both the capital expenses and the costs of underutilized existing infrastructure. 
Reportedly, AstraZeneca is working on establishing a semi-integrated continuous downstream manufacturing facility at pilot scale, with considerations limited to transitioning the continuous capture step to commercial scale.</p><p>Other challenges include a shortage of trained personnel and as-yet-unidentified risks, such as supply chain issues, difficulties with disposable technologies, and regulatory hurdles associated with deploying the critical novel analytics required for continuous processes.</p><p>While there are challenges galore, overcoming them would give continuous manufacturing game-changing advantages for the industry. Building small, modular, and flexible facilities that can easily adapt to production needs and run autonomously is a significant advantage. Such facilities could eliminate the need to invest billions in large-scale manufacturing plants tailored to specific biologic drugs, even before completing clinical trials, which is the current industry norm.</p><p>This method has sizable potential in emerging economies like India due to several key factors. Firstly, individual patients in such countries face substantial out-of-pocket costs for healthcare, with out-of-pocket payments accounting for 43% of health expenditure in low-income countries, compared to just 13% in high-income countries. Secondly, these nations are experiencing a disproportionate rise in chronic diseases among their populations. For instance, the incidence of diabetes has surged by 58% in Asia and 40% in Africa over the last decade, while Europe has seen a 2% decrease and the United States has remained unchanged.</p><p>These factors, coupled with population growth in emerging economies, have created rapidly expanding and highly price-sensitive markets for biotherapeutics in these regions. 
Given the limited presence of large-scale biologics manufacturing facilities in these countries, there exists a significant opportunity to establish integrated continuous bioprocessing plants capable of producing large volumes of biotherapeutic drugs at substantially reduced manufacturing costs. This approach could help address the healthcare affordability issues faced by patients in emerging economies like India, making essential biotherapeutic drugs more accessible and affordable.</p><p>“I don’t see the point of this work if it does not translate,” says Prof. Rathore. He is now working in collaboration with manufacturing companies to give his research a more pragmatic outlook. He aims to integrate the continuous bioprocessing currently underway in his lab at IIT Delhi into practical supply chains through strategic collaborations with pharmaceutical companies. This can make biotherapeutics more affordable and accessible to the patients who need them most.</p>]]></content:encoded></item></channel></rss>