Georgia State researchers design artificial vision device for microrobots

Researchers at Georgia State University (GSU) have developed an artificial vision device, referred to as an ‘electric eye,’ for micro-sized robots. The device uses synthetic methods to replicate the biochemical processes involved in natural vision. Notably, the researchers have made significant progress in color recognition, an area that has been difficult to miniaturize: traditional color sensors take up substantial physical space and offer comparatively poor color accuracy.

The breakthrough achieved by the GSU researchers is attributed to the device’s unique vertical stacking architecture. Van der Waals semiconductors give the sensors precise color recognition while simplifying the downscaled lens system: by stacking these materials, the researchers can finely control parameters such as band structure and layer thickness so that the device distinguishes red, green, and blue light.
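The paper describes the color sensing in hardware terms; purely as an illustration, the sketch below shows how readings from three vertically stacked detectors, each most sensitive to one band, could be converted into an RGB estimate by inverting a calibration (responsivity) matrix. The matrix values, function name, and example readings are assumptions for illustration, not figures from the study.

```python
import numpy as np

# Hypothetical responsivity matrix for three vertically stacked photodetectors
# (rows: detector layers, columns: R, G, B contributions).
# These numbers are illustrative assumptions, not measured values.
RESPONSIVITY = np.array([
    [0.85, 0.20, 0.05],   # top layer: most sensitive to red
    [0.15, 0.80, 0.10],   # middle layer: most sensitive to green
    [0.05, 0.25, 0.90],   # bottom layer: most sensitive to blue
])

def photocurrents_to_rgb(currents):
    """Estimate RGB intensities from three stacked-detector photocurrents.

    Solves currents = RESPONSIVITY @ rgb for rgb, then clips to [0, 1].
    """
    rgb = np.linalg.solve(RESPONSIVITY, np.asarray(currents, dtype=float))
    return np.clip(rgb, 0.0, 1.0)

if __name__ == "__main__":
    # Example readings dominated by the middle (green-sensitive) layer.
    print(photocurrents_to_rgb([0.30, 0.75, 0.28]))
```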

The research, published in the scientific journal ACS Nano, primarily focuses on demonstrating the fundamental principles and feasibility of artificial vision using the new micro-sized image sensor. Sidong Lei, the lead researcher and assistant professor of Physics at GSU, emphasizes that vision captures over 80% of information in various domains, including research, industry, medicine, and daily life. The ultimate objective of their research is to develop a micro-scale camera for microrobots capable of navigating narrow and inaccessible spaces, thus opening up new possibilities in medical diagnosis, environmental studies, manufacturing, archaeology, and more.

The technology is currently patent-pending with Georgia State’s Office of Technology Transfer and Commercialization.

In other news: as of April 2023, the NHS waiting list in England had reached a record high of 7.42 million people waiting for treatment. Nearly 3.09 million of them had been waiting for more than 18 weeks, and around 371,000 for more than a year. The median wait for treatment was 13.8 weeks, nearly double the pre-COVID median of 7.2 weeks in April 2019.

One of the primary healthcare applications targeted by the AI Diagnostic Fund is the use of AI tools to analyze chest X-rays. Chest X-rays are a common diagnostic tool for lung cancer, the leading cause of cancer-related deaths in the UK, and more than 600,000 are carried out each month in England. Rolling out AI tools across NHS Trusts could therefore help clinicians detect cancer earlier and improve patient outcomes.

The integration of AI in the NHS has already shown positive results, including reducing the time required to diagnose and treat stroke patients. By enabling faster stroke diagnosis, AI has tripled the chances of patients living independently after a stroke.

The funding provided through the AI Diagnostic Fund will support the implementation of any AI diagnostic tool that NHS Trusts wish to deploy. However, the proposals must demonstrate value for money to receive approval. The government has already invested £123 million in 86 AI technologies, benefiting patients through improved stroke diagnosis, screening, cardiovascular monitoring, and home-based condition management.

The introduction of AI into healthcare aligns with the NHS’s mission to adopt the latest proven technology to enhance patient care and provide value for taxpayers.


IGN, the popular gaming website, is introducing an AI tool aimed at simplifying troubleshooting and enhancing gameplay. The chatbot could spare players targeted Google searches and long trawls through online communities such as Reddit. It is currently available for IGN's The Legend of Zelda: Tears of the Kingdom guide and offers assistance during gameplay. For now it is open to everyone, though an IGN account will be required in the future.

In its current alpha testing phase, the chatbot draws on guides, tips, and other content published on IGN, as well as insights from contributors' own playthroughs. The aim is to provide quick solutions to intricate challenges without forcing players to navigate multiple pages; IGN envisions the guides feature as a comprehensive, convenient resource for gamers seeking fast answers.

Although aimed primarily at experienced players, the chatbot can also be a valuable resource for newcomers. Questions such as whether Tears of the Kingdom is beginner-friendly yield fitting responses, though occasional delays have been observed. The tool marks a step towards streamlining problem-solving and fostering a more enjoyable and engaging experience for gamers.
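IGN has not said how the chatbot is built. Purely as an illustration, the sketch below shows the retrieval-then-answer pattern that guide bots of this kind commonly follow, using a TF-IDF retriever over a few made-up guide snippets. The snippet texts, function names, and the verbatim-answer shortcut are assumptions; a production system would pass the retrieved passages to a language model rather than returning them directly.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical guide snippets standing in for published guide content.
GUIDE_SNIPPETS = [
    "Tears of the Kingdom opens on Great Sky Island, a self-contained tutorial area.",
    "Fuse lets you attach materials to weapons to boost durability and damage.",
    "Ultrahand is used to move and join objects when building vehicles and bridges.",
]

vectorizer = TfidfVectorizer()
snippet_matrix = vectorizer.fit_transform(GUIDE_SNIPPETS)

def retrieve(question, top_k=2):
    """Return the guide snippets most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), snippet_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [GUIDE_SNIPPETS[i] for i in ranked]

def answer(question):
    """Compose a reply from retrieved snippets (a stand-in for the LLM step)."""
    context = retrieve(question)
    return "Based on the guides: " + " ".join(context)

if __name__ == "__main__":
    print(answer("Is Tears of the Kingdom beginner friendly?"))
```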

IGN launched an AI chatbot for its game guides

Criminals Have Created Their Own ChatGPT Clones

Amid growing concerns and increased scrutiny, the Detroit Police Department (DPD) faces yet another lawsuit over a wrongful arrest resulting from a flawed facial recognition match. The latest victim, Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth person to come forward saying they were wrongly implicated in a crime by the controversial technology used by law enforcement. Woodruff was accused of robbery and carjacking, an accusation she found hard to believe given her visibly pregnant state.

This pattern of wrongful arrests stemming from inaccurate facial recognition matches has raised serious alarm, particularly because all six reported victims identified by the American Civil Liberties Union (ACLU) have been African American. Woodruff's case is the first involving a woman, and it is the third known wrongful arrest in the past three years attributed specifically to the DPD's reliance on faulty facial recognition technology. In a separate case, Robert Williams has an ongoing lawsuit against the DPD, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), stemming from his wrongful arrest in January 2020 due to the same flawed technology. Phil Mayor, Senior Staff Attorney at the ACLU of Michigan, expressed deep concern, emphasizing that the department continues to use the technology despite knowing the serious repercussions of relying on it for arrests.

Law enforcement's use of facial recognition has sparked heated debate over accuracy, potential racial bias, and possible infringements on privacy and civil liberties. Studies have consistently shown that these systems exhibit higher error rates when identifying people with darker skin tones, disproportionately affecting marginalized communities. Critics argue that relying solely on facial recognition to make arrests carries grave consequences for innocent people, as Woodruff's case illustrates. Calls for transparency and accountability have escalated, with civil rights organizations demanding that the DPD stop using facial recognition until it can be rigorously evaluated and shown to be both unbiased and accurate. As the case unfolds, the public awaits the department's response to mounting pressure over the technology's misapplication and its impact on the rights and lives of innocent people.

Error-prone facial recognition leads to another wrongful arrest

A team of researchers from The University of Texas at Austin has modified a commercial virtual reality headset to measure brain activity, enabling the study of human reactions to stimuli such as hints and stressors. By integrating a noninvasive electroencephalography (EEG) sensor into a Meta VR headset, the team has built a comfortable, wearable device suited to long-term use. The sensor captures the brain's electrical signals during immersive virtual reality interactions. Potential applications range from helping people with anxiety to assessing the attention and mental stress of pilots in flight simulators, and the device also allows a person to perceive the world through a robot's eyes.

Nanshu Lu, a professor in the Cockrell School of Engineering's Department of Aerospace Engineering and Engineering Mechanics who led the research, emphasized the heightened immersion of virtual reality and the ability of the technology to yield improved measurements of brain responses within such environments. Combined VR and EEG devices already exist commercially, but the researchers note that current products are expensive and uncomfortable, which limits how long and how widely they can be used. To address this, the team designed soft, conductive, spongy electrodes that overcome the drawbacks of traditional electrodes. The modified headsets place these electrodes in the top strap and forehead pad, connect them through a flexible circuit with conductive traces similar to electronic tattoos, and attach an EEG recording device to the rear of the headset.

The work ties into a larger UT Austin research initiative on a robot delivery network, which will also support an extensive study of human-robot interaction: observers wearing the EEG-equipped headsets will experience events from a robot's perspective while the cognitive load of prolonged observation is measured. To validate the headset, the researchers built a driving simulation game in collaboration with José del R. Millán, an expert in brain-machine interfaces, in which users press a button in response to turn commands while the EEG records brain activity to assess their attention levels. The team has initiated preliminary patent procedures for the EEG technology and is open to collaborating with VR companies to integrate the innovation directly into headsets. The researchers come from Electrical and Computer Engineering, Aerospace Engineering and Engineering Mechanics, Mechanical Engineering, and Biomedical Engineering, as well as Artue Associates Inc. in South Korea.
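The study does not publish its analysis pipeline. As a rough illustration of how EEG recorded during the driving task could be reduced to an attention measure, the sketch below computes a simple beta/theta band-power ratio from one channel. The sampling rate, band limits, and the ratio itself are common-practice assumptions, not the metric used by the UT Austin team.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz (not specified in the article)

def band_power(freqs, psd, low, high):
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])

def attention_index(eeg_segment):
    """Crude attention proxy: beta (13-30 Hz) over theta (4-8 Hz) power.

    Higher values are often read as greater engagement; this is a
    simplified stand-in, not the measure used in the study.
    """
    freqs, psd = welch(eeg_segment, fs=FS, nperseg=FS * 2)
    theta = band_power(freqs, psd, 4, 8)
    beta = band_power(freqs, psd, 13, 30)
    return beta / theta

if __name__ == "__main__":
    # Synthetic 10-second segment standing in for one forehead electrode.
    t = np.arange(0, 10, 1 / FS)
    segment = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
    segment += 0.1 * np.random.randn(t.size)
    print(f"attention index: {attention_index(segment):.2f}")
```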

Modified virtual reality tech can measure brain activity

Today in AI: Alibaba open-sources two AI models, AI-based HYRGPT eliminates the first two steps of hiring and more

AI and Space Exploration: The role of AI in space research and robotics.

AI and Sports Analytics: Enhancing performance analysis and player insights with AI.

AI and Virtual Reality: The synergy between AI and virtual reality technologies.

AI for Mental Health: How AI is aiding in early detection and treatment of mental health conditions.

AI in Disaster Response: Utilizing AI for real-time disaster monitoring and relief efforts.

AI in Fashion Design: AI-driven tools for fashion trend forecasting and personalized styling.

AI in Human Resources: Streamlining HR processes with AI-driven talent acquisition and management.

AI in Language Translation: Advancements in AI-driven language translation services.

AI in Gaming: Exploring AI’s role in video game development and player experiences.

AI and Personal Assistants: The evolution of virtual assistants and AI-powered personal aides.

What’s going on with Google Assistant?

UK intelligence agencies seek to weaken data protection safeguards

MBA Grads With Startup Ambitions Attracted to Health Care, AI

IBM and Hugging Face release AI foundation model for climate science

BSI publishes guidance to boost trust in AI for healthcare

Apple plays nice with others for an OpenUSD metaverse

On the Baroque Art Trail with IBM Watson

Gaming Industry Know-How Created AMD’s Winning Data Center Strategy

Future Designers Unleash Creativity with AI

Blockchain: It Really is a Big Deal

AI in Wildlife Conservation: Using AI for wildlife monitoring and anti-poaching efforts.

AI in Renewable Energy: Leveraging AI for efficient energy management in green technologies.

AI in Precision Agriculture: Optimizing farming practices with AI-driven technologies.

AI and Cybersecurity: How AI is enhancing cybersecurity defenses against cyber threats.

Thermal imaging innovation allows AI to see through pitch darkness like broad daylight