Built a mobile application that scans nutrition labels with image recognition to track calories.

Role: Developer and Product Designer

Tools: Java, Android Studio, Google Vision, Firebase, InVision


This is an example of how our nutrition label scanner would work.


Before HackDavis, one team member noticed that tracking her fitness and calorie intake in her health app required an enormous amount of time and effort: every single item had to be entered into the phone by hand.


We decided our solution would be to scan the nutrition label of each item you ate and use that to count your calories for you.

Since datasets of common recipes, food items, and their nutrition information already existed, we believed the more interesting component to build at the hackathon was the scanner itself: a computer-vision application, built on the Google Vision and Firebase APIs, that could take a picture of the food you ate, look it up in the database, and enter the nutrition label information into a calorie tracker on your phone.
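
The core of that flow is a simple lookup: the label Google Vision recognizes becomes a key into the pre-loaded nutrition dataset. A minimal sketch in Java, with hypothetical names and a hard-coded stand-in for the Firebase-hosted dataset:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/** Sketch of the scan-to-tracker lookup. The class and map contents are
 *  illustrative stand-ins for the recipe/food dataset stored in Firebase. */
public class NutritionLookup {
    private final Map<String, Integer> caloriesByFood = new HashMap<>();

    public NutritionLookup() {
        // A real app would load these records from the database.
        caloriesByFood.put("apple", 95);
        caloriesByFood.put("granola bar", 190);
    }

    /** Given a food name recognized by the vision API, return its calories. */
    public Optional<Integer> lookupCalories(String recognizedLabel) {
        return Optional.ofNullable(
            caloriesByFood.get(recognizedLabel.toLowerCase().trim()));
    }
}
```

Normalizing the recognized text (lowercasing, trimming) before the lookup matters, since OCR and label-detection output rarely match database keys exactly.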

In this way, each user could compile a profile to compare their goals day by day, filled with data visualizations in digestible graphs for quick comparison. This required each user to sign up for an account with basic information plus more personal details, such as age, weight, height, gender, exercise activity level, and their goal, since each of these factors into computing a daily target.
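
We never specified a formula in the write-up, but a common way to turn exactly those profile fields into a daily calorie target is the Mifflin-St Jeor equation scaled by an activity multiplier. A hedged sketch (the app's actual formula is an assumption here):

```java
/** Daily calorie estimate from profile data via Mifflin-St Jeor:
 *  BMR = 10*kg + 6.25*cm - 5*age + (male ? +5 : -161),
 *  then multiplied by an activity factor (~1.2 sedentary to ~1.9 very active). */
public class CalorieGoal {
    public static double dailyCalories(double weightKg, double heightCm,
                                       int age, boolean male,
                                       double activityFactor) {
        double bmr = 10 * weightKg + 6.25 * heightCm - 5 * age
                   + (male ? 5 : -161);
        return bmr * activityFactor;
    }
}
```

For example, a sedentary 25-year-old man at 70 kg and 175 cm would get a target of roughly 2000 calories per day.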

Product Design

I started by designing the step-by-step process of how our application would look and run.

The first screen a first-time user would see would be our login screen. It would work like other login screens, and sign-ups would happen right within the application, asking the user to fill out the information discussed above.

A low-fi of how the login screen would work.

The main landing screen would include a pie chart diagramming the overall caloric/nutrition intake for the day. Below it, small sections would detail each meal, with icons representing the food groups and nutrition info eaten in that meal.
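
The data behind that pie chart is just the day's calorie shares. One way to compute them, assuming the tracker stores macronutrient grams per day (the 4/4/9 kcal-per-gram conversions are standard, but the method name is hypothetical):

```java
/** Converts a day's macronutrient grams into calorie shares for a pie chart.
 *  Protein and carbs contribute 4 kcal per gram; fat contributes 9. */
public class DaySummary {
    /** Returns {proteinShare, carbShare, fatShare}, summing to 1. */
    public static double[] calorieShares(double proteinG, double carbsG,
                                         double fatG) {
        double p = proteinG * 4, c = carbsG * 4, f = fatG * 9;
        double total = p + c + f;
        return new double[] { p / total, c / total, f / total };
    }
}
```

Each share maps directly to a slice angle (share × 360°) in whatever Android charting view draws the diagram.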

Swiping right would bring up a search menu listing other meals with their calories and nutrition, so a user could plan future meals. This search would be powered by our existing database of recipes.

Swiping left would bring up a Snapchat-like camera that would identify whether the item being photographed was food or a nutrition label; for a nutrition label, it would read the text and numbers listed on it. After the picture was taken, a screen would open listing caloric and other information about the item, filled in either from the database (for a photo of food) or directly from the text read off the nutrition label.
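
The nutrition-label path boils down to pulling structured numbers out of whatever text Google Vision returns. A sketch of that step, assuming the OCR output contains a line like "Calories 230" (the parser itself is illustrative, not our exact implementation):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Extracts the calorie count from OCR'd nutrition-label text. */
public class LabelParser {
    // Case-insensitive: "Calories", then any non-digits, then the number.
    private static final Pattern CALORIES =
        Pattern.compile("(?i)calories\\D*(\\d+)");

    /** Returns the calorie count found in the text, or -1 if none is found. */
    public static int parseCalories(String ocrText) {
        Matcher m = CALORIES.matcher(ocrText);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }
}
```

The same pattern-per-field approach extends to fat, carbs, and protein, with the caveat that real OCR output needs fuzzier matching than this.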

A low-fi of how the application would work.

Some Issues

Ideally we would have created a simple mobile application using the APIs to pull up data, but the UI engineering and the visual camera scanner with Google Vision proved more difficult than anticipated. This was also the first time I used Java, so I had to pick up building the application while simultaneously learning Android Studio. Definitely not the smartest move to have gone in without preparing myself on how to use the software.

In addition, the Wi-Fi at the hackathon was not great and made it difficult to work over the course of the 24 hours. In the end, we left the hackathon to work at a coffee shop before coming back to demo the project.


Working with an API I hadn't used before was fairly difficult in a fast-paced setting, and I wish I had tried out the software before the hackathon. I also believe we were too ambitious in trying to build both a database and a computer-vision project within 24 hours; it may have been more productive to focus on perfecting either the scanner or the database, but not both at the same time.

You can view our final product in the GitHub repo located here.