Spotter

An AI-powered tool for first responders that won at CalHacks 10 (October ‘23)

Intro

Three of my housemates and I created Spotter, an intuitive robotics app that gives first responders the information they need to act swiftly and efficiently in disaster-stricken areas. Features we built include:
  • autonomous survivor identification
  • multilingual conversational abilities
  • facial and vocal emotional analysis
  • remote control capabilities via a custom-built server and intuitive UI
  • an actionable dashboard providing a summary of critical survivor data, prioritized by urgency
We built this on top of Spot, the Boston Dynamics robot dog. With Spotter, Spot can be deployed into disaster zones to locate and communicate with survivors in their native languages while analyzing visual and auditory cues to assess survivors’ emotional and physical states. This allows first responders to determine the level of urgency and needed actions remotely. Spotter saves resources and lives by informing responders before they step into dangerous environments.
Code: code-spot by raviriley (updated Dec 12, 2023)

Demos


Spot with our tripod, phone, and speakers duct-taped on.

Inspiration

Currently, in war-torn and disaster-struck areas, first responders risk their lives unnecessarily because they lack the resources needed to accurately and safely assess a disaster zone. By using robotics, we can reduce the risk to human life.

What it does

Spotter turns the Spot robot into an independent rescue machine that understands human emotion and natural language using AI. It detects the language a survivor is speaking and adjusts its responses to match.
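The language-matching behavior is essentially a prompted LLM call. Below is a minimal sketch of that step using the OpenAI Python client; the model name, prompt wording, and function name are illustrative, not our exact project code.

```python
# Minimal sketch of the multilingual reply step (assumes openai>=1.0 and an
# OPENAI_API_KEY in the environment); prompt wording is illustrative only.
from openai import OpenAI

client = OpenAI()

def respond_in_survivor_language(transcript: str) -> str:
    """Ask GPT-4 to detect the survivor's language and reply in that language."""
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a search-and-rescue assistant running on a robot. "
                    "Detect the language of the survivor's message and reply in "
                    "that same language with short, calming instructions."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(respond_in_survivor_language("¿Hay alguien ahí? Estoy atrapado."))
```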

How we built it

  • Flask robot control server integrating the Boston Dynamics SDK that runs on Spot
  • Flask backend that takes in audio and visual input and serves multiple layers of AI analysis & data (see the sketch after this list)
    • Hume AI, OpenCV, GPT-4, Bark
  • Next.js frontend that enables remote robot control & serves AI-powered insights
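To give a feel for how the analysis backend is wired, here is a rough skeleton of a Flask endpoint that accepts a JPEG frame and fans it out to the analysis layers. The helper functions analyze_face_emotion and summarize_with_gpt are hypothetical stand-ins for the Hume AI and GPT-4 calls, not our actual project code.

```python
# Rough skeleton of the analysis backend (assumes Flask, OpenCV, and NumPy);
# the two helpers below are placeholders for the real Hume AI and GPT-4 calls.
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def analyze_face_emotion(frame: np.ndarray) -> dict:
    # Placeholder for the Hume AI facial-expression analysis.
    return {"top_emotion": "unknown"}

def summarize_with_gpt(emotions: dict) -> str:
    # Placeholder for the GPT-4 summary that feeds the urgency dashboard.
    return "no summary"

@app.route("/analyze_frame", methods=["POST"])
def analyze_frame():
    # Expect a raw JPEG frame in the request body (streamed from the camera relay).
    data = np.frombuffer(request.get_data(), dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    if frame is None:
        return jsonify({"error": "could not decode frame"}), 400
    emotions = analyze_face_emotion(frame)
    return jsonify({"emotions": emotions, "summary": summarize_with_gpt(emotions)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
```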

Challenges we ran into

Connecting to and controlling the robot was extremely difficult. I spent the first 4-5 hours of the hackathon grinding through the system dependencies needed to run Python and the Boston Dynamics SDK on Spot’s internal Linux machine. I ended up building a custom control server that connects directly to Spot and drives its motors, so that we wouldn’t have to connect to the internal OS directly.
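A minimal version of such a control server can look like the sketch below, built on the official bosdyn-client Python SDK: the frontend hits plain HTTP routes, and the server holds the lease and issues velocity commands. The IP, credentials, route names, and port are placeholders, not our actual setup.

```python
# Sketch of a Flask control server for Spot using the bosdyn-client SDK.
# SPOT_IP, SPOT_USER, and SPOT_PASSWORD are placeholder environment variables.
import os
import time

import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import (
    RobotCommandBuilder,
    RobotCommandClient,
    blocking_stand,
)
from flask import Flask, jsonify, request

app = Flask(__name__)

# Connect and authenticate to Spot, then sync clocks before issuing commands.
sdk = bosdyn.client.create_standard_sdk("SpotterControlServer")
robot = sdk.create_robot(os.environ.get("SPOT_IP", "192.168.80.3"))
robot.authenticate(os.environ["SPOT_USER"], os.environ["SPOT_PASSWORD"])
robot.time_sync.wait_for_sync()

lease_client = robot.ensure_client(LeaseClient.default_service_name)
command_client = robot.ensure_client(RobotCommandClient.default_service_name)

@app.route("/stand", methods=["POST"])
def stand():
    robot.power_on(timeout_sec=20)
    blocking_stand(command_client)
    return jsonify({"status": "standing"})

@app.route("/move", methods=["POST"])
def move():
    # Expect JSON like {"v_x": 0.5, "v_y": 0.0, "v_rot": 0.0, "duration": 1.0}.
    body = request.get_json(force=True)
    cmd = RobotCommandBuilder.synchro_velocity_command(
        v_x=body.get("v_x", 0.0),
        v_y=body.get("v_y", 0.0),
        v_rot=body.get("v_rot", 0.0),
    )
    command_client.robot_command(cmd, end_time_secs=time.time() + body.get("duration", 1.0))
    return jsonify({"status": "moving"})

if __name__ == "__main__":
    # Hold the lease for the lifetime of the server so commands are accepted.
    with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
        app.run(host="0.0.0.0", port=5000)
```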
I also struggled with the cameras onboard Spot. The internal Linux OS lacked the drivers needed for the full-color webcam mounted on the front, and the Wi-Fi was too slow to install them, so my hacky solution was tapping into Continuity Camera via an iPhone strapped to the robot. We then streamed the iPhone camera feed to the backend, which streamed back the annotated feed in real time.
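Because Continuity Camera shows up as a regular webcam on macOS, the relay can be as simple as the sketch below: grab frames with OpenCV and POST them to the backend. The device index and endpoint URL are assumptions for illustration.

```python
# Hedged sketch of the camera relay: read Continuity Camera frames with OpenCV
# and send them to the analysis backend. Device index and URL are guesses.
import cv2
import requests

BACKEND_URL = "http://localhost:5001/analyze_frame"  # hypothetical endpoint
CAMERA_INDEX = 1  # Continuity Camera often enumerates after the built-in camera

def stream_frames() -> None:
    cap = cv2.VideoCapture(CAMERA_INDEX)
    if not cap.isOpened():
        raise RuntimeError("could not open Continuity Camera")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Encode each frame as JPEG and post it to the analysis backend.
            encoded, jpeg = cv2.imencode(".jpg", frame)
            if not encoded:
                continue
            resp = requests.post(BACKEND_URL, data=jpeg.tobytes(), timeout=5)
            print(resp.json())
    finally:
        cap.release()

if __name__ == "__main__":
    stream_frames()
```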
The Hume API was relatively easy to use, and we connected it to the live Continuity Camera stream as well.
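Roughly, the per-frame emotion call looked something like the following, using the Hume Python SDK as it existed around late 2023 (HumeStreamClient with a FaceConfig). The API key and frame path are placeholders, and newer SDK versions expose a different interface; treat this as a sketch rather than our exact integration code.

```python
# Sketch of sending a single camera frame to Hume's streaming API using the
# 2023-era Python SDK; key and path are placeholders.
import asyncio

from hume import HumeStreamClient
from hume.models.config import FaceConfig

async def score_frame(frame_path: str) -> dict:
    client = HumeStreamClient("YOUR_HUME_API_KEY")
    async with client.connect([FaceConfig()]) as socket:
        # Each frame is sent individually; the result contains emotion scores.
        return await socket.send_file(frame_path)

if __name__ == "__main__":
    print(asyncio.run(score_frame("frame.jpg")))
```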

Accomplishments that we're proud of

Fixing Spot’s internal Linux dependencies. This blocked every team from using Spot and took up most of my time on the first day, but solving it enabled the robot to be used by all teams.

What we learned

We learned that it is quite complex to combine varied tech stacks across a range of hardware and software products. We learned to approach these problems by introducing levels of abstraction that let different members of the team work in parallel.

What's next for Spotter

We hope to make Spotter fully autonomous so that Spot can traverse and navigate disaster environments completely independently.