## Inspiration Many investors looking to invest in startup companies are often overwhelmed by the sheer number of investment opportunities, worried that they will miss promising ventures without doing adequate due diligence. Likewise, since startups all present their data in a unique way, it is challenging for investors to directly compare companies and effectively evaluate potential investments. On the other hand, thousands of startups with a lot of potential lack visibility to the right investors. Thus, we came up with Disruptive as a way to bridge this gap and provide a database for investors to view important insights about startups tailored to specific criteria. ## What it does Disruptive scrapes information from various sources (company websites, LinkedIn, news, and social media platforms) to generate the newest possible market insights. After authenticating on the homepage, investors are prompted to indicate whether they are interested in investing in pre-seed or post-seed companies. When an option is selected, the investor is directed to a searchable database of company data scraped from Kaggle. From the results table, a company can be selected, and the investor can view company insights, business-analyst graphs, fund companies, and interact with a Streamlit chatbot interface. Users can also add more data through a DAO platform by getting funded by companies looking for data. The investor also has the option of adding a company, along with information about it, to the database. ## How we built it The frontend was built with Next.js, TypeScript, and Tailwind CSS, with Firebase authentication used to verify users from the home page. Selenium, together with proxies, was used to scrape company information for the database. Figma was used for design. The backend was built using Flask, Streamlit, and Taipy. We used the Circle API and Hedera to generate bounties using blockchain. SQL and GraphQL were used to generate insights, OpenAI and QLoRA were used for semantic/similarity search, and GPT fine-tuning was used for few-shot prompting. ## Challenges we ran into Having never worked with Selenium and web scraping, we found understanding the dynamic loading and retrieval of web content challenging. The measures some websites have against scraping were also interesting to learn about and work around. We also worked with ChatGPT and did prompt engineering to generate business insights - a task that can sometimes yield unexpected responses from ChatGPT! ## Accomplishments that we're proud of + What we learned We learned how to use a lot of new technology during this hackathon. As mentioned above, we learned how to use Selenium, as well as Firebase authentication and GPT fine-tuning. ## What's next for Disruptive Disruptive can implement more scrapers to improve the data behind its insight generation. This would involve scraping from sources other than Golden once there is more funding. Furthermore, the integration between the frontend and the blockchain can be improved. Lastly, we could package the generated insights into proposal-style reports for clients.
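A minimal sketch of the kind of Selenium scraper described above, assuming a headless Chrome driver; the listing URL, CSS selectors, and proxy address are hypothetical placeholders rather than Disruptive's actual configuration.

```python
# Hedged sketch: scrape a (hypothetical) startup listing page with Selenium.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")                                # no visible browser window
options.add_argument("--proxy-server=http://proxy.example.com:8080")  # placeholder proxy

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/startups")   # placeholder listing page
    driver.implicitly_wait(10)                   # wait for dynamically loaded content
    cards = driver.find_elements(By.CSS_SELECTOR, ".startup-card")    # hypothetical selector
    companies = [
        {
            "name": card.find_element(By.CSS_SELECTOR, ".name").text,
            "stage": card.find_element(By.CSS_SELECTOR, ".stage").text,
        }
        for card in cards
    ]
    print(companies)   # rows like these feed the searchable company database
finally:
    driver.quit()
```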
## Inspiration Cryptocurrency interest has surged in recent years, and along with it, a lot of fear and uncertainty. Few actually know what drives cryptocurrency markets, and research in the area is lacking. Many people are turned off by the high volatility and risk involved with investing in cryptocurrencies, especially when compared to their stock market counterparts. We sought to make the mysterious predictable with **CryptAI, a model that predicts cryptocurrency price trajectories and, ultimately, drives highly profitable trades.** ## What it does and how we built it CryptAI puts deep learning-powered investment recommendations at your fingertips. Using Google Cloud’s Natural Language API, we integrate public opinion with domestic (NASDAQ, S&P 500) and international market data to predict cryptocurrency price trajectories. Our algorithm uses a **deep neural network implemented with TensorFlow and Keras**. Our probabilistic **CryptRisk Factor** evaluates the risk (0 being low, 10 being high) associated with investing at any given time. **Using CryptAI, we were able to generate a short-term investment strategy for Ethereum with a substantially higher ROI than simulated, uninformed investments.** ## Challenges we ran into The existing cryptocurrency models we encountered rely on limited feature sets, so we knew this was a gap CryptAI had to fill. Therefore, the most time-consuming (yet salient) challenge we encountered was scraping data from the web. We dealt with several server timeouts, missing data, and formatting issues that took us hours to sort out, but in the end, we wanted our training dataset to be as comprehensive as possible to give us the best predictions. ## Accomplishments that we're proud of We’re really proud of our ROI metrics, and honestly did not expect our model to perform as well as it did. We were also uncertain if we’d have enough data for our model (since crypto is not as heavily researched as stock market data), but we managed to parse enough to successfully build and tune its hyperparameters. We’re also really happy with the way our web portal turned out. ## What we learned We learned which loss and optimization functions perform best for a given model. We learned how to make requests to the Google Cloud API and use the Keras framework with both TensorFlow and Theano backends. And of course, we got to exercise our creative side with some web development at the end. ## What's next for CryptAI We’re hoping to build up the capabilities of our predictive pipeline with larger training sets and a chatbot. We're excited for people interested in investing in cryptocurrencies to use CryptAI as their decision-making platform.
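As a rough illustration of the model described above, here is a minimal Keras sketch that maps a feature vector (a sentiment score plus market indicators) to a predicted price change and a 0-10 risk score; the feature layout, layer sizes, and two-headed output are assumptions for illustration, not CryptAI's actual architecture.

```python
import numpy as np
from tensorflow import keras

# Hypothetical features: [sentiment, NASDAQ change, S&P 500 change, intl. index change, prior price change]
n_features = 5

inputs = keras.Input(shape=(n_features,))
x = keras.layers.Dense(64, activation="relu")(inputs)
x = keras.layers.Dense(32, activation="relu")(x)
price_delta = keras.layers.Dense(1, name="price_delta")(x)            # predicted price change
risk = keras.layers.Dense(1, activation="sigmoid", name="risk")(x)    # 0-1, scaled to 0-10 below

model = keras.Model(inputs, [price_delta, risk])
model.compile(optimizer="adam",
              loss={"price_delta": "mse", "risk": "binary_crossentropy"})

# Dummy data just to show the fit/predict flow.
X = np.random.rand(256, n_features).astype("float32")
y_price = np.random.randn(256, 1).astype("float32")
y_risk = np.random.rand(256, 1).astype("float32")
model.fit(X, {"price_delta": y_price, "risk": y_risk}, epochs=3, verbose=0)

pred_price, pred_risk = model.predict(X[:1], verbose=0)
print("predicted change:", float(pred_price[0, 0]),
      "CryptRisk Factor:", round(float(pred_risk[0, 0]) * 10, 1))
```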
# Inspiration Traditional startup fundraising is often restricted by stringent regulations, which make it difficult for small investors and emerging founders to participate. These barriers favor established VC firms and high-net-worth individuals, limiting innovation and excluding a broad range of potential investors. Our goal is to break down these barriers by creating a decentralized, community-driven fundraising platform that democratizes startup investments through a Decentralized Autonomous Organization, also known as a DAO. # What It Does To achieve this, our platform leverages blockchain technology and the DAO structure. Here’s how it works: * **Tokenization**: We use blockchain technology to allow startups to issue digital tokens that represent company equity or utility, creating an investment proposal through the DAO. * **Lender Participation**: Lenders join the DAO, where they use cryptocurrency, such as USDC, to review and invest in the startup proposals. * **Startup Proposals**: Startup founders create proposals to request funding from the DAO. These proposals outline key details about the startup, its goals, and its token structure. Once submitted, DAO members review the proposal and decide whether to fund the startup based on its merits. * **Governance-based Voting**: DAO members vote on which startups receive funding, ensuring that all investment decisions are made democratically and transparently. The voting is weighted based on the amount lent in a particular DAO. # How We Built It ### Backend: * **Solidity** for writing secure smart contracts to manage token issuance, investments, and voting in the DAO. * **The Ethereum Blockchain** for decentralized investment and governance, where every transaction and vote is publicly recorded. * **Hardhat** as our development environment for compiling, deploying, and testing the smart contracts efficiently. * **Node.js** to handle API integrations and the interface between the blockchain and our frontend. * **Sepolia**, the Ethereum testnet where the smart contracts have been deployed and connected to the web application. ### Frontend: * **MetaMask** integration to enable users to seamlessly connect their wallets and interact with the blockchain for transactions and voting. * **React** and **Next.js** for building an intuitive, responsive user interface. * **TypeScript** for type safety and better maintainability. * **TailwindCSS** for rapid, visually appealing design. * **Shadcn UI** for accessible and consistent component design. # Challenges We Faced, Solutions, and Learning ### Challenge 1 - Creating a Unique Concept: Our biggest challenge was coming up with an original, impactful idea. We explored various concepts, but many were already being implemented. **Solution**: After brainstorming, the idea of a DAO-driven decentralized fundraising platform emerged as the best way to democratize access to startup capital, offering a novel and innovative solution that stood out. ### Challenge 2 - DAO Governance: Building a secure, fair, and transparent voting system within the DAO was complex, requiring deep integration with smart contracts, and we needed to ensure that all members, regardless of technical expertise, could participate easily. **Solution**: We developed a simple and intuitive voting interface, while implementing robust smart contracts to automate and secure the entire process. This ensured that users could engage in the decision-making process without needing to understand the underlying blockchain mechanics.
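The on-chain logic lives in the Solidity contracts deployed to Sepolia, but the lending-weighted voting rule itself is easy to state; the sketch below models it off-chain in Python purely to illustrate the governance behaviour (member names and amounts are made up).

```python
# Illustrative off-chain model of lending-weighted DAO voting; the real rule
# is enforced by the Solidity smart contracts, not by this script.
from collections import defaultdict

# Hypothetical USDC amounts each member has lent into the DAO.
lent = {"alice": 5_000, "bob": 1_000, "carol": 2_500}

# Each member votes for one startup proposal.
votes = {"alice": "proposal-42", "bob": "proposal-7", "carol": "proposal-42"}

def tally(lent, votes):
    """Weight each member's vote by the amount they have lent."""
    totals = defaultdict(int)
    for member, proposal in votes.items():
        totals[proposal] += lent.get(member, 0)
    winner = max(totals, key=totals.get)
    return winner, dict(totals)

winner, totals = tally(lent, votes)
print("funded:", winner, totals)   # proposal-42 wins with 7,500 of 8,500 total weight
```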
## Accomplishments that we're proud of * **Developing a Fully Functional DAO-Driven Platform**: We successfully built a decentralized platform that allows startups to tokenize their assets and engage with a global community of investors. * **Integration of Robust Smart Contracts for Secure Transactions**: We implemented robust smart contracts that govern token issuance, investments, and governance-based voting, backed by extensive unit and end-to-end tests. * **User-Friendly Interface**: Despite the complexities of blockchain and DAOs, we are proud of creating an intuitive and accessible user experience. This lowers the barrier for non-technical users to participate in the platform, making decentralized fundraising more inclusive. ## What we learned * **The Importance of User Education**: As blockchain and DAOs can be intimidating for everyday users, we learned the value of simplifying the user experience and providing educational resources to help users understand the platform's functions and benefits. * **Balancing Security with Usability**: Developing a secure voting and investment system with smart contracts was challenging, but we learned how to balance high-level security with a smooth user experience. Security doesn't have to come at the cost of usability, and this balance was key to making our platform accessible. * **Iterative Problem Solving**: Throughout the project, we faced numerous technical challenges, particularly around integrating blockchain technology. We learned the importance of iterating on solutions and adapting quickly to overcome obstacles. # What’s Next for DAFP Looking ahead, we plan to: * **Attract DAO Members**: Our immediate focus is to onboard more lenders to the DAO, building a large and diverse community that can fund a variety of startups. * **Expand Stablecoin Options**: While USDC is our starting point, we plan to incorporate more blockchain networks to offer a wider range of stablecoin options for lenders (EURC, Tether, or Curve). * **Compliance and Legal Framework**: Even though DAOs are decentralized, we recognize the importance of working within the law. We are actively exploring ways to ensure compliance with global regulations on securities, while maintaining the ethos of decentralized governance.
# [eagleEye](https://aliu139.github.io/eagleEye/index.html) Think Google Analytics... but for the real world ![Eagle Eye](https://raw.githubusercontent.com/aliu139/eagleEye/master/res/ee.png "EagleEye") ## Team * [Austin Liu](https://github.com/aliu139) * Benjamin Jiang * Mahima Shah * Namit Juneja ## AI EagleEye is watching out for you. Our program links into already existing CCTV feeds and webcam streams but provides a new wealth of data, simply by pairing the power of machine learning algorithms with the footage already being captured. All of this is then output to a simple-to-use front-end website, where you can not only view updates in real time but also filter your existing data to derive actionable insights about your customer base. The outputs are incredibly intuitive and no technical knowledge is required to create beautiful dashboards that replicate the effects of complicated SQL queries. All data automatically syncs through Firebase and the site is mobile compatible. ## Technologies Used * AngularJS * Charts.js * OpenCV * Python * scikit-learn * Pandas * Firebase ## Technical Writeup Image processing is done using OpenCV with Python bindings. This data is posted to Firebase, which continuously syncs with the local server. The dashboard pulls this information and displays it using Angular. Every several minutes, the front end pulls the data that has not already been clustered and lumps it together in a Pandas dataframe for quicker calculations. The data is clustered using a modified scikit-learn library and reinserted into a separate Firebase database. The chart with filters pulls from this database because it is not as essential for these queries to operate in real time. All of the front end is dynamically generated using AngularJS. All of the data is fetched using API calls to Firebase. By watching $scope variables, we can have the charts update in real time. In addition, by using Charts.js, we also get smoother transitions and a more aesthetic UI. All of the processing that occurs on the filters page is also calculated by services and linked into the controller. All calculations are done on the fly with minimal processing time.
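A minimal sketch of the capture-and-sync loop from the technical writeup, assuming OpenCV's stock HOG person detector and the Firebase Realtime Database REST API; the Firebase URL and node name are placeholders.

```python
import time
import cv2
import requests

FIREBASE_URL = "https://your-project.firebaseio.com"   # placeholder project URL

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)   # existing CCTV feed or webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    # Post one datapoint; the Angular dashboard syncs from the same node.
    requests.post(f"{FIREBASE_URL}/footfall.json",
                  json={"timestamp": time.time(), "count": int(len(boxes))})
    time.sleep(60)   # sample roughly once a minute
```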
# Frodo ## Motivations It's no secret that access to quality education is dictated by your finances. Traditional tutoring services cost anywhere from $25-$50/hour. We wanted to make a platform that our younger siblings could use, unlike the traditional, clunky, and *annoying-to-use* chatbot that has become ubiquitous today. To level the educational playing field and experiment with an entirely novel AI interface, we developed Frodo. ## Features Frodo is an AI tutor that you can access through your web browser, but unlike other educational AI tools, **there's no chatbox**. Instead, we've built a system that intelligently gives feedback on your work when you need it, seamlessly. This means that you never have to type out a prompt to get help. As avid programmers, one of the first things we wanted to integrate was coding lessons. With Frodo, users can attempt various programming tasks within their web browser. When users are stuck and aren't sure how to continue, Frodo moves around the code editor, points to the problematic line, and gives the user a hint on how to solve the problem. These hints are never complete answers to the coding tasks, only guiding messages that point the user in the right direction. Of course, sometimes you want to initiate a conversation with Frodo and ask a question yourself. Sticking to our vision of leaving the chatbox behind, we've implemented an ultra-low-latency speech-to-text Groq pipeline that opens up another channel of communication with Frodo! Simply press `cmd` + `b` to start the voice chat, and start talking! Along with coding, Frodo also supports *any* free-response questions on *any* topic. Our demo also includes math and humanities tasks, and our platform can easily support more tasks from any domain. ## Technologies Frodo leverages a number of AI technologies: * OpenAI's GPT-4o for its advanced reasoning and problem solving * Groq-powered whisper-large-v3 for low-latency speech-to-text inference * Langchain for structured output (i.e. being able to identify problematic lines) * Lexica Aperture v4 (Stable Diffusion) for generating the visuals Frodo itself is built on the SvelteKit full-stack web framework. We used TailwindCSS for styling and making custom UI components. The text/code editor is built using CodeMirror, and we developed a custom pipeline for code execution. ## Scaling For the hackathon, we had to manually create tasks and their corresponding test cases. However, Frodo is designed to scale, and we can easily import more tasks from various domains. Thanks for reading; we hope you liked our project! <3
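To make the no-chatbox feedback loop concrete, here is a hedged sketch of how a hint request might be structured: the task and the student's code go to GPT-4o with instructions to return the problematic line number and a nudge, never a full solution. The prompt wording and JSON shape are assumptions, not Frodo's actual Langchain schema.

```python
import json
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

def get_hint(task: str, code: str) -> dict:
    """Ask GPT-4o which line is problematic and for a guiding hint (never the answer)."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ("You are a tutor. Reply as JSON with keys 'line' (1-based number of the "
                         "problematic line) and 'hint' (a guiding message, never the full solution).")},
            {"role": "user", "content": f"Task: {task}\n\nStudent code:\n{code}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

hint = get_hint("Return the sum of a list", "def total(xs):\n    return xs")
print(hint["line"], hint["hint"])   # the editor would point at this line and show the hint
```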
# F.A.C.E. (FACE Analytics with Comp-vision Engineering) ## Idea Using computer vision to provide business analytics on customers in brick-and-mortar stores ## Features * number of customers over a time period * general customer demographics, such as age and gender * ability to see the quantity of returning customers * dashboard to view all of this information * support for multiple cameras ## Technology * A Python script records from its camera, performs vision analysis, and sends stats to the Node.js back end * An Express.js web app provides communication between the Python scripts, MongoDB, and the dashboard * The dashboard is built using Bootstrap and jQuery
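A small sketch of the Python-script side of this pipeline, assuming a Haar-cascade face detector and an Express endpoint at a placeholder URL; the camera ID and route are illustrative.

```python
import time
import cv2
import requests

BACKEND = "http://localhost:3000/api/stats"   # placeholder Express.js endpoint

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The real script would also estimate age/gender and recognise returning customers.
    requests.post(BACKEND, json={"camera": "store-entrance",
                                 "timestamp": time.time(),
                                 "faces": int(len(faces))})
    time.sleep(30)
```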
## Inspiration One of the most exciting parts about hackathons is the showcasing of the final product, well-earned after hours upon hours of sleep-deprived hacking. Part of the presentation work lies in the Devpost entry. I wanted to build an application that can rate the quality of a given entry to help people write better Devpost posts, which can help them better represent their amazing work. ## What it does The Chrome extension can be used on a valid Devpost entry web page. Once the user clicks "RATE", the extension will automatically scrape the relevant text and send it to a Heroku Flask server for analysis. The final score given to a project entry is an aggregate of many factors, such as descriptiveness, the use of technical vocabulary, and the score given by an ML model trained against thousands of project entries. The user can use the score as a reference to improve their entry posts. ## How I built it I used UiPath as an automation tool to collect, clean, and label data across thousands of projects in major hackathons over the past few years. After getting the necessary data, I trained an ML model to predict the probability for a given Devpost entry to be amongst the winning projects. I also used the data to calculate other useful metrics, such as the distribution of project entry lengths, the average amount of terminology used, etc. These models are then uploaded to a Heroku cloud server, where I can get aggregated ratings for texts using a web API. Lastly, I built a JavaScript Chrome extension that detects Devpost web pages, scrapes data from the page, and presents the ratings to the user in a small pop-up. ## Challenges I ran into Firstly, I am not familiar with website development. It took me a hell of a long time to figure out how to build a Chrome extension that collects data and uses external web APIs. The data collection part was also tricky. Even with great graphical automation tools at hand, it was still very difficult to do large-scale web scraping for someone relatively inexperienced with website dev like me. ## Accomplishments that I'm proud of I am very glad that I managed to finish the project on time. It was quite an overwhelming amount of work for a single person. I am also glad that I got to work with data from absolute scratch. ## What I learned Data collection, hosting an ML model over the cloud, and building Chrome extensions with various features. ## What's next for Rate The Hack! I want to refine the features and the rating scheme.
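A stripped-down sketch of the Flask rating endpoint described above: it aggregates a descriptiveness score, a technical-vocabulary score, and a (stubbed) model probability into a single rating. The weights, vocabulary list, and route name are illustrative, not the real Heroku service.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

TECH_VOCAB = {"api", "backend", "frontend", "model", "database", "deployed"}   # illustrative subset

def model_probability(text: str) -> float:
    """Stub for the trained winner-prediction model hosted alongside the app."""
    return 0.5

@app.route("/rate", methods=["POST"])
def rate():
    text = request.get_json()["text"]
    words = text.lower().split()
    length_score = min(len(words) / 400, 1.0)                          # descriptiveness proxy
    vocab_score = len(TECH_VOCAB.intersection(words)) / len(TECH_VOCAB)
    score = 100 * (0.4 * length_score + 0.3 * vocab_score + 0.3 * model_probability(text))
    return jsonify({"score": round(score, 1)})

if __name__ == "__main__":
    app.run()
```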
## Inspiration The inspiration came from the fact that modern democracy is flawed. Unfortunately, government lobbying, voter discrimination, misdirection, and undelivered promises are commonplace in democracies worldwide. This has led to declining voter trust and less participation by the public during key moments that can seriously affect the power of the people over time. ## What it does Our online web application allows the people's vote to matter once again. It is a platform used for voting on different subjects where the choices are carefully analyzed by experts before being sent to the voters. When an organizer creates a poll, the options are sent to specific experts who are not only experts in the field of that poll's subject but are also from a diverse group to minimize biases. Then, the experts have the opportunity to comment on the organizer's options, stating each choice's pros and cons. After the analysis, the poll is sent to a representative pool of people who rank the options using the experts' comments. This allows the average voter to better understand the different stakes at play and make a more intelligent decision. The organizer can then check the votes and see the winning side. ## How we built it In the front end, we mostly used React-Bootstrap for the UI. We created multiple pages for the organizers, experts, and voters. In the back end, we used Firebase to store the vote data and the experts' advice. It is also connected to a web scraper that searches for the most relevant experts and voters. That web scraper uses the Twilio API to text people, making commenting/voting more user-friendly and intuitive. ## Challenges we ran into We ran into a lot of UI issues using React-Bootstrap, especially when it came to connecting the front end with the database. We had to deal with multiple cases where the UI wouldn't load because of a single error between the UI and Firebase. ## Accomplishments that we're proud of The web scraper connected to the Twilio API is one of the accomplishments we are the proudest of. It runs fast and has excellent interconnections with the Twilio API, allowing it to reach dozens of participants within seconds. ## What we learned We learned that being a perfectionist can be a hindrance during a hackathon. As we worked more and more on the front end, some of us debated the look of our web application. Although the discussion refined our UI, it also wasted valuable time that could have been used to implement more functionality. ## What's next for Time2Vote We will later implement different algorithms to count votes and add proper authentication to our system.
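A short sketch of the Twilio step mentioned above, which texts a poll invitation to each expert or voter the scraper selects; the credentials, phone numbers, and message wording are placeholders.

```python
import os
from twilio.rest import Client

# Credentials come from the Twilio console, normally via environment variables.
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

participants = ["+15551230001", "+15551230002"]   # numbers surfaced by the web scraper

for number in participants:
    client.messages.create(
        to=number,
        from_="+15557654321",   # placeholder Twilio number
        body="You've been selected for a Time2Vote poll. Reply to comment or rank the options.",
    )
```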
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their emotions. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go about their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, and corresponding keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps an essential part of any hackathon. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
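A heavily simplified sketch of the interpretation step described above: band powers from the BCI stream are mapped to a mood keyword, which becomes a Stable Diffusion prompt. The thresholds, mood labels, and the stubbed generation call are assumptions for illustration only.

```python
def mood_from_bands(alpha: float, beta: float, theta: float) -> str:
    """Very rough mapping from EEG band power to a mood keyword (illustrative thresholds)."""
    if alpha >= beta and alpha >= theta:
        return "calm"
    if beta > alpha:
        return "focused"
    return "dreamy"

def build_prompt(mood: str) -> str:
    return f"an abstract painting expressing a {mood} state of mind, vivid colours, high detail"

def generate_art(prompt: str) -> bytes:
    """Stub: the real app sends the prompt to a Stable Diffusion model and returns image bytes."""
    raise NotImplementedError

# Example band powers computed from the OpenBCI Ganglion stream.
prompt = build_prompt(mood_from_bands(alpha=0.62, beta=0.21, theta=0.17))
print(prompt)   # the generated pieces are then shown to the user on the web platform
```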
## Inspiration Not all hackers wear capes - but not all capes get washed correctly. Dorming on a college campus the summer before our senior year of high school, we realized how difficult it was to decipher laundry tags and determine the correct settings to use while juggling a busy schedule and challenging classes. We decided to try Google's up-and-coming **AutoML Vision API Beta** to detect and classify laundry tags, to save headaches, washing cycles, and the world. ## What it does L.O.A.D identifies the standardized care symbols on tags, considers the recommended washing settings for each item of clothing, clusters similar items into loads, and suggests care settings that optimize loading efficiency and prevent unnecessary wear and tear. ## How we built it We took reference photos of hundreds of laundry tags (from our fellow hackers!) to train a Google AutoML Vision model. After trial and error and many camera modules, we built an Android app that allows the user to scan tags and fetch results from the model via a call to the Google Cloud API. ## Challenges we ran into Acquiring a sufficiently sized training image dataset was especially challenging. While we had a sizable pool of laundry tags available here at PennApps, our reference images only represent a small portion of the vast variety of care symbols. As a proof of concept, we focused on identifying six of the most common care symbols we saw. We originally planned to utilize the Android Things platform, but issues with image quality and processing power limited our scanning accuracy. Fortunately, the similarities between Android Things and Android allowed us to shift gears quickly and remain on track. ## Accomplishments that we're proud of We knew that we would have to painstakingly acquire enough reference images to train a Google AutoML Vision model with crowd-sourced data, but we didn't anticipate just how awkward asking to take pictures of laundry tags could be. We can proudly say that this has been a uniquely interesting experience. We managed to build our demo platform entirely out of salvaged sponsor swag. ## What we learned As high school students with little experience in machine learning, Google AutoML Vision gave us a great first look into the world of AI. Working with Android and Google Cloud Platform gave us a lot of experience working in the Google ecosystem. Ironically, working to translate the care symbols has made us fluent in laundry. Feel free to ask us any questions! ## What's next for Load Optimization Assistance Device We'd like to expand care symbol support and continue to train the machine learning model with more data. We'd also like to move away from pure Android and integrate the entire system into a streamlined hardware package.
## Inspiration Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book. Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars! Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible. ## What it does Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud. ## How we built it **Software (Alex K)** - On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS. The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words. Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation. **Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses. ## Challenges we ran into The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging! Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time. Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length. 
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together. ## Acknowledgements The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
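To make the OCR-plus-language-model step concrete, here is a hedged sketch: Google Cloud Vision transcribes the chosen frame, then a simple unigram model snaps each word to the most frequent close match. The tiny frequency table and the similarity cut-off are stand-ins, not the exact filter used.

```python
import difflib
from google.cloud import vision

# Unigram model: word -> relative frequency (a tiny illustrative sample of an English word list).
UNIGRAMS = {"introduction": 0.0010, "induction": 0.0004, "the": 0.0500, "reading": 0.0020}

def correct(word: str) -> str:
    """Snap an OCR'd word to the most frequent close match in the unigram table."""
    candidates = difflib.get_close_matches(word.lower(), UNIGRAMS, n=3, cutoff=0.8)
    return max(candidates, key=UNIGRAMS.get) if candidates else word

def read_frame(jpeg_bytes: bytes) -> str:
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=jpeg_bytes))
    raw = response.text_annotations[0].description if response.text_annotations else ""
    return " ".join(correct(w) for w in raw.split())

# The corrected text is then sent to Google Cloud Text-to-Speech and read aloud.
print(correct("Intreduction"))   # -> "introduction"
```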
# Omakase *"I'll leave it up to you"* ## Inspiration On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how? ## What It Does We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use their ingredients. ## What We Learned Most of the members in our group were inexperienced in mobile app development and backend work. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more. ## How We Built It We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel Architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID and a set of recipes for that ingredient was obtained. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP. ## What We Are Proud Of We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put in an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty-looking recipes, including a Caribbean black bean and fruit salad that uses oranges and salsa. ## Challenges We Faced Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult. ## What's Next We have big plans for our app in the future. One next step we would like to implement is allowing users to include their dietary restrictions and food preferences so we can better match the recommendations to the user. We also want to make this app available on smart fridges; currently, fridges like Samsung’s have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods.
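The recipe-matching step described above boils down to intersecting per-ingredient recipe sets and ranking by coverage; a minimal sketch with made-up ingredient and recipe IDs is shown below.

```python
from collections import Counter

# Hypothetical mapping from detected ingredients to recipe IDs from the Food.com data.
recipes_by_ingredient = {
    "salsa":     {101, 204, 317},
    "orange":    {204, 317, 498},
    "hot_sauce": {101, 204},
}

def rank_recipes(ingredients):
    """Rank recipes by how many of the detected ingredients they use."""
    counts = Counter()
    for ingredient in ingredients:
        counts.update(recipes_by_ingredient.get(ingredient, set()))
    # Keep recipes that use a majority of the detected ingredients, best coverage first.
    threshold = len(ingredients) / 2
    return [(recipe_id, n) for recipe_id, n in counts.most_common() if n > threshold]

print(rank_recipes(["salsa", "orange", "hot_sauce"]))   # e.g. [(204, 3), (101, 2), (317, 2)]
```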
## Inspiration After witnessing the power of collectible games and card systems, our team was determined to prove that this enjoyable and unique game mechanism wasn't just a niche and could be applied to a social activity game that anyone could enjoy or use to better understand one another (taking a note from Cards Against Humanity's book). ## What it does Words With Strangers pairs users up with a friend or stranger and gives each user a queue of words that they must make their opponent say without saying the word themselves. The first person to finish their queue wins the game. Players can then purchase collectible new words to build their deck and trade or give words to other friends or users they have given their code to. ## How we built it Words With Strangers was built on Node.js with core HTML and CSS styling as well as usage of some Bootstrap framework functionalities. It is deployed on Heroku and also makes use of TODAQ's TaaS service API to maintain the integrity of transactions as well as the unique rareness and collectibility of words and assets. ## Challenges we ran into The main area of difficulty was incorporating TODAQ TaaS into our application, since it was a new service that none of us had any experience with. In fact, not only is it not blockchain, but none of us had ever even touched in-application purchases before. Furthermore, creating a user-friendly UI that was fully functional with all our target functionalities was also a large challenge that we tackled. ## Accomplishments that we're proud of Our UI not only has all our desired features, but it is also user-friendly and stylish (comparable with Cards Against Humanity and other items in the genre), and we were able to add multiple word packages that users can buy and trade/transfer. ## What we learned Through this project, we learned a great deal about the background of purchase transactions on applications. More importantly, though, we gained knowledge of what TODAQ does and were able to grasp what it truly means to have an asset or application online that is utterly unique and one of a kind: transferable without infinite duplication. ## What's next for Words With Strangers We would like to enhance the UI for WwS to look even more user-friendly and be stylish enough for a successful deployment online and in app stores. We want to continue to program packages for it using TODAQ and use dynamic programming principles moving forward to simplify our process.
## Inspiration We wanted to create a multiplayer game that would allow anyone to join and participate freely. We couldn't decide on what platform to build for, so we got the idea to create a game that is so platform-independent we could call it platform-transcendent. Since the game is played through email and SMS, it can be played on any internet-enabled device, regardless of operating system, age, or capability. You could even participate through a public computer in a library, removing the need to own a device altogether! ## What it does The game allows user-created scavenger hunts to be uploaded to the server. Then other users can join by emailing the relevant email address or texting commands to our phone number. The user will then be sent instructions on how to play and updates as the game goes on. ## How I built it We have a Microsoft Azure server backend that implements the Twilio and SendGrid APIs. All of our code is written in Python. When you send us a text or email, Twilio and SendGrid notify our server, which processes the data, updates the server-side persistent records, and replies to the user with new information. ## Challenges I ran into While sending emails is very straightforward with SendGrid and Twilio works well both for inbound and outbound texts, setting up inbound email turned out to be very difficult due to the need to update the MX records, which take a long time to propagate. Debugging all of the game logic was also a challenge. ## Accomplishments that I'm proud of It works! We have a sample game set up and you could potentially win $5 Amazon Gift Cards! ## What I learned Working with servers is a lot of work! Debugging code on a computer that you don't have direct access to can be quite a hassle. ## What's next for MailTrail We want to improve and emphasize the ability for users to create their own scavenger hunts.
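A small sketch of the inbound-text path described above, written as a Flask-style webhook for illustration (the actual server runs on Microsoft Azure): Twilio posts each incoming SMS, the game state is updated, and a TwiML reply goes back to the player. The route and game-logic helper are placeholders.

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def advance_game(player: str, command: str) -> str:
    """Stub for the persistent game logic: update the server-side records, return the next clue."""
    return f"Got '{command}'. Head to the library fountain for your next clue!"

@app.route("/sms", methods=["POST"])
def inbound_sms():
    # Twilio delivers inbound texts as form-encoded POSTs with 'From' and 'Body' fields.
    player = request.form["From"]
    command = request.form["Body"].strip().lower()
    reply = MessagingResponse()
    reply.message(advance_game(player, command))
    return str(reply)

if __name__ == "__main__":
    app.run()
```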
## Inspiration During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before. ## What It Does Our application, called Hack2, is an intelligent search engine that uses machine learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions. ## How We Built It We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and the Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML. Next was the machine learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see the Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API. We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions. After finishing the Android app and backend development, we decided to add a web app, made using React, to make the service more accessible. ## Challenges We Ran Into From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as GitHub READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models. We also ran into quite a few issues getting our backend up and running, as it was our first time using Microsoft Azure. Specifically, Azure Functions do not currently support Python fully, meaning that we did not have the developer tools we expected to be able to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this issue by refactoring our cloud compute functions with multithreaded capabilities.
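To make the Sentence2Vec-style comparison concrete, here is a minimal sketch of document-level similarity by averaging word vectors and taking cosine similarity, reported as the percent match shown to the user. The toy three-dimensional vectors stand in for the custom word2vec matrix trained on the "hacker-friendly" corpus.

```python
import numpy as np

# Stand-in for the custom word2vec matrix (the real vectors are much higher dimensional).
WORD_VECS = {
    "chrome":    np.array([0.9, 0.1, 0.0]),
    "extension": np.array([0.8, 0.2, 0.1]),
    "recipe":    np.array([0.0, 0.9, 0.3]),
    "app":       np.array([0.3, 0.4, 0.5]),
}

def doc_vector(text: str) -> np.ndarray:
    """Average the vectors of known words to get a document-level embedding."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def percent_similarity(idea: str, existing: str) -> float:
    a, b = doc_vector(idea), doc_vector(existing)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(100 * np.dot(a, b) / denom) if denom else 0.0

print(round(percent_similarity("chrome extension app", "recipe app"), 1))   # percent match in the UI
```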
## What We Learned We learned a lot about Microsoft Azure’s cloud services, mobile development, and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time. On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work). ## What’s Next for Hack2 The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore is LSTM Siamese Neural Networks, which “remember” previous classifications moving forward.
# [ugly.video](https://ugly.video/): One-click Zoom meetings in the browser ### Why is Zoom bad? 1. **Everyone is tired of holding meetings and taking classes over Zoom calls. It's old and boring af.** 2. **Slow meeting startup** * Click meeting link * Browser asks to open link in Zoom app * Click 'Allow' * Desktop app takes a second to load * Click 'Join with computer audio' to finally start hearing feedback 3. **Not optimal for creating spontaneous hangout rooms with friends.** * Unnecessary friction: why does it require **3 clicks** to obtain the invite link??? + Participants + Invite + Copy Invite Link 4. **Sharing images is tedious** * Also lots of friction, have to send links over chat * Feels so disconnected when everyone views things at different times * Screensharing is a possible solution but also brings in more friction + How many times have we all heard, **"Hey @host, can you enable participant screensharing?"** ### How does [ugly.video](https://ugly.video/) solve these issues? 1. **Chill aesthetic design** * Comic Sans font :D * **Surprise!** button populates your whiteboard with **memes** 2. **For non-hosts: Clicking on a room link instantly places you into the meeting** * **Zero friction** for entering meeting * No desktop apps to open: video streamed through browser + Also more secure * Automatically sets up your audio and video connection * Only 1 extra click needed if your camera/mic permissions aren't yet enabled 3. **For hosts: Instantly create a meeting by visiting <https://ugly.video/>** * Button to copy meeting link is available immediately with **1 click** 4. **Automatic whiteboard for drawing and sharing images** * No friction for enabling whiteboards as the whiteboard is available by default **Note:** Zoom is obviously a very well-made product, but there are many specific video conferencing use cases that could see significant improvements in UI/UX. Zoom is very optimized for business usage but there is a growing trend toward more **spontaneous** and **engaging** forms of online socialization (e.g. **Clubhouse**). [ugly.video](https://ugly.video/) is optimized to **decrease friction** in online meetings by **minimizing click flows** and **promoting cheerfulness** in its design. ### How I built it WebRTC (Web Real Time Communication) is an open source standard that lets browsers communicate directly with one another. [ugly.video](https://ugly.video/) takes advantage of WebRTC to stream video and send data between meeting participants without using a complex backend server. This also improves **security** and **privacy** for our users as they control who can view their data. I used [Peer.js](https://peerjs.com/) (a wrapper library for WebRTC APIs) to facilitate the connections between browsers. For the whiteboard, I used [Fabric.js](http://fabricjs.com/) to easily interface with canvas elements. ### Challenges we ran into 1. The most challenging part was managing the connections between browsers that send video and whiteboard data. Each connection consists of an **initiator** and a **recipient**, and this requires us to deal with 4 possible scenarios: * Initiator sending data * Initiator receiving data * Recipient sending data * Recipient receiving data. Mismanaging one of these scenarios was my most common logic error and took me a long time to initially debug because I had to walk through each line of code step by step. 2. 
The second most challenging thing I had to overcome was reading through the [Fabric.js](http://fabricjs.com/) documentation to figure out how their **Path object** was implemented. The path object is created whenever someone draws a line on the canvas/whiteboard, and I wanted to send data through WebRTC to the other meeting participants so they could reconstruct the path on their end. I had a weird bug where the data was being sent correctly but the reconstructed path wouldn't appear on the recipient's screen. This bug ended up taking me a while to fix because I had to dig into the Fabric.js source code to figure out exactly what was causing the error. It turns out I just needed to deconstruct the path data into a string format instead of an object format, which was being incorrectly parsed by the Path constructor. ### Accomplishments that we're proud of The **Surprise!** button that populates your background with memes ### What we learned * How to install Node.js and npm properly lol * WebRTC, Peer.js, and Fabric.js * Hosting an app on Google Cloud App Engine * Setting up domain names for our website through Google Domains ## What's next for Ugly Video I want to enable all content (video and images) to be movable around the screen. Right now, the videos are shown in an adaptive grid and the images are dispersed randomly. The critical technical challenge will be how to most optimally transmit content updates to the other meeting participants in a way that reduces memory load.
## Inspiration Almost all undergraduate students, especially at large universities like the University of California, Berkeley, will take a class that has a huge lecture format, with several hundred students listening to a single professor speak. At Berkeley, students (including three of us) took CS61A, the introductory computer science class, alongside over 2000 other students. Besides forcing some students to watch the class on webcasts, the sheer size of classes like these impaired the ability of the lecturer to take questions from students, with both audience and lecturer frequently unable to hear the question, which notably did not register on webcasts at all. This led us to seek out a solution to this problem that would enable everyone to be heard in a practical manner. ## What does it do? *Questions?* solves this problem using something that we all have with us at all times: our phones. By using a peer-to-peer connection with the lecturer’s laptop, a student can speak into their smartphone’s microphone and have that audio directly transmitted to the audio system of the lecture hall. This eliminates the need for any precarious transfer of a physical microphone or the chance that a question will go unheard. Besides usage in lecture halls, this could also be implemented in online education or live broadcasts to allow participants to directly engage with the speaker instead of feeling disconnected through a traditional chatbox. ## How we built it We started with a fail-fast strategy to determine the feasibility of our idea. We did some experiments and were then confident that it should work. We split our work streams and worked on the design and backend implementation at the same time. In the end, we had some time to make it shiny when the whole team worked together on the frontend. ## Challenges we ran into We tried the WebRTC protocol but ran into some problems with the implementation, the available frameworks, and the documentation. We then shifted to WebSockets and tried to make it work on mobile devices, which is easier said than done. Furthermore, we had some issues with web security and therefore used an AWS EC2 instance with Nginx and Let's Encrypt TLS/SSL certificates. ## Accomplishments that we're (very) proud of With most of us being very new to the hackathon scene, we are proud to have developed a platform that enables collaborative learning, in which we made sure that whatever someone has to say, everyone can hear it. With *Questions?*, it is not just a conversation between a student and a professor in a lecture; it can be a discussion involving the whole class. *Questions?* enables users’ voices to be heard. ## What we learned WebRTC looks easy but is not easy to get working … at least in our case. Today everything has to be encrypted … even in dev mode. TreeHacks 2020 was fun. ## What's next for *Questions?* In the future, we could integrate polls and iClicker features and also extend functionality for presenters and attendees at conferences, showcases, and similar events. *Questions?* could also be applied even more broadly to any situation normally requiring a microphone: any situation where people need to hear someone’s voice.
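As a rough sketch of the WebSocket relay described above (using Flask-SocketIO for illustration rather than the team's exact stack), the server can forward audio chunks from a student's phone to the lecturer's laptop, which plays them through the hall's audio system.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("audio_chunk")
def relay_audio(chunk):
    # Re-broadcast each chunk from the student's phone to every other connected client
    # (in practice, the lecturer's laptop, which plays it over the lecture hall PA).
    emit("audio_chunk", chunk, broadcast=True, include_self=False)

if __name__ == "__main__":
    # In deployment, TLS is terminated by Nginx with a Let's Encrypt certificate, as described above.
    socketio.run(app, host="0.0.0.0", port=5000)
```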
## Inspiration We were inspired by apps such as GamePigeon (which allows you to play games together over text messages) and Gather (which allows you to have virtual interactions RPG style). We also drew inspiration from Discord’s voice chat activity feature that allows you to play games together in a call. ## What it does RoundTable is a virtual meeting platform that allows people to connect in the same way they would at a real-life round table discussion: everyone in a room can manipulate and interact with objects or games on the table as they please. However, we take this a step further by providing an easy-to-use API for any developer to create an activity plugin by submitting a single JavaScript file. ## How we built it We built the client using React and MUI, with Markdown being used to render chat messages. The client is mainly responsible for rendering events that happen on the roundtable and reporting the user’s actions to the server through Socket.io. The server is built with Typescript and also uses Socket.io to establish communication with the client. The server is responsible for managing the game states of specific instances of plugins as well as controlling all aspects of the rooms. ## Challenges we ran into One challenge we ran into was balancing participating in events and workshops during the hackathon with working on our project. Initially, we had a very ambitious idea for the final product and thought that it was possible if we worked on it as much as possible. However, it soon became clear that in doing so, we would be jeopardizing our own experiences at HTN and we should aim to have a compromise instead. So, we scaled down our idea and in return, were able to participate in many of the amazing events such as real-life among us and the silent disco. Another challenge was that our team members had a few disagreements about the design and implementation of RoundTable. For example, we had two proposed ideas for how custom plugins could work. One of our group members insisted that we should go with an implementation involving the use of embedded iframes while the others wanted to use direct source files manipulating a canvas. Although we wasted a lot of time debating these issues, eventually a collective decision was reached. ## Accomplishments that we're proud of We’re proud of the fact that we managed our time much better this time around than in previous hackathons. For example, we were able to decide on an idea ahead of time, flesh it out somewhat, and learn some useful technologies as opposed to previous hackathons when we had to rush to come up with an idea on the day of. Also, we clearly divided our duties and each worked on an equally important part of the application. ## What we learned Through doing this project, we learned many technical things about creating an RTC web application using SocketIO (which most of us hadn’t used before), React, and Typescript. We also learned to use Material UI together with CSS stylesheets to develop an attractive front-end for the app and to design a robust plugin system that integrates with p5.js to create responsive modules. In addition, we learned many things about collaboration and how to work better as a team. Looking back, we would not spend as much time debating the advantages and disadvantages of a specific design choice and instead pick one and prepare to implement it as much as possible. 
## What's next for RoundTable Although we are satisfied with what we were able to accomplish in such a short time span, there are still many things that we are looking to add to RoundTable in the future. First of all, we will implement a voice and video chat to improve the level of connection between the participants of the roundtable. Also, we will improve our plugin API by making it more flexible (allowing for modules such as playing a shared video) and an account system so that rooms can be organized easily. Finally, we will improve the security of the application by sandboxing custom modules and doing end-to-end encryption.
## Inspiration Have you ever had to wait in long lines just to buy a few items from a store? Ever not wanted to interact with employees to get what you want? Now you can buy items quickly and hassle-free through your phone, without interacting with any people whatsoever. ## What it does CheckMeOut is an iOS application that allows users to buy an item that has been 'locked' in a store - for example, clothing that has security sensors attached or items that are physically locked behind glass. Users can scan a QR code or use Apple Pay to quickly access the information about an item (price, description, etc.) and 'unlock' the item by paying for it. The user will not have to interact with any store clerks or wait in line to buy the item. ## How we built it We used Xcode to build the iOS application and Microsoft Azure to host our backend. We used an Intel Edison board to help simulate the 'locking' of an item. ## Challenges I ran into We used many technologies that our team was unfamiliar with, namely Swift and Azure. ## What I learned I've learned not to underestimate things you don't know, to ask for help when you need it, and to just have a good time. ## What's next for CheckMeOut We hope to see it more polished in the future.
## Inspiration
On our way to PennApps, our team was hungrily waiting in line at Shake Shack while trying to think of the best hack idea to bring. Unfortunately, rather than being able to sit comfortably and pull out our laptops to research, we were forced to stand in a long line to reach the cashier, only to be handed a clunky buzzer that countless other greasy-fingered customers had laid their hands on. We decided that there has to be a better way; a way to simply walk into a restaurant, spend more time with friends, and stand in line as little as possible. So we made it.
## What it does
Q'd (pronounced queued) digitizes the process of waiting in line by allowing restaurants and events to host a line through the mobile app and letting users line up digitally through their phones as well. It also gives users a sense of the different opportunities around them by letting them search for nearby queues. Once in a queue, the user "takes a ticket" which decrements until they are the first person in line. In the meantime, they are free to do whatever they want rather than being limited to the 2-D pathway of a line for the next minutes (or even hours). When the user is soon to be first in line, they are sent a push notification and requested to appear at the entrance, where the host of the queue can check them off, let them in, and remove them from the queue. In addition to removing the hassle of waiting in line, hosts of queues can access their Q'd Analytics to learn how many people were in their queues at what times and learn key metrics about the visitors to their queues.
## How we built it
Q'd comes in three parts: the native mobile app, the web app client, and the Hasura server.
1. The mobile iOS application is built with Apache Cordova, which allows the native iOS app to be written in pure HTML and JavaScript. This framework lets the application run on Android, iOS, and the web while remaining incredibly responsive.
2. The web application is built with good ol' HTML, CSS, and JavaScript. The Materialize CSS framework gives the application a professional feel, and resources such as AmCharts give the user a clear understanding of their queue metrics.
3. Our beast of a server was constructed with the Hasura application, which allowed us to build our own data structure and use its API calls for the data across all of our platforms. Therefore, every method dealing with queues or queue analytics talks to our Hasura server through API calls and database use.
## Challenges we ran into
A key challenge we discovered was the implementation of Cordova and its associated plugins. Having been primarily Android developers, the native environment of the iOS application challenged our skills and gave us a lot to learn before we were ready to implement it properly. Next, although less of a challenge, the Hasura application had a learning curve before we were able to really use it successfully. In particular, we had issues with relationships between different objects within the database. Nevertheless, we persevered and were able to get it working really well, which made building the front end easier.
## Accomplishments that we're proud of
Overall, we're extremely proud of coming in with little knowledge about Cordova and iOS development, and only learning about Hasura at the hackathon, then being able to develop a fully responsive app using all of these technologies relatively well.
While we considered building what we are comfortable with (particularly web apps), we wanted to push our limits and take on the challenge of learning about mobile development and cloud databases. Another accomplishment we're proud of is making it through our first hackathon longer than 24 hours :)
## What we learned
During our time developing Q'd, we were exposed to and became proficient in various technologies ranging from Cordova to Hasura. However, besides technology, we learned important lessons about taking the time to properly flesh out our ideas before jumping in headfirst. We devoted the first two hours of the hackathon to really understanding what we wanted to accomplish with Q'd, so in the end, we can be truly satisfied with what we have been able to create.
## What's next for Q'd
In the future, we're looking towards enabling hosts of queues to include premium options that users can take advantage of to skip lines or be part of more exclusive lines. Furthermore, we want to expand the data analytics that hosts can take advantage of in order to improve their own revenue and to make a better experience for their visitors and customers.
## Inspiration
The inspiration for this project was drawn from the daily experiences of our team members. As post-secondary students, we often make purchases for our peers for convenience, yet forget to follow up. This can lead to disagreements and accountability issues. Thus, we came up with the idea of CashDat to alleviate this commonly faced issue. People will no longer have to remind their friends about paying them back! With the available APIs, we realized that we could create an application to directly tackle this problem.
## What it does
CashDat is an application available on the iOS platform that allows users to keep track of who owes them money, as well as who they owe money to. Users are able to scan their receipts, divide the costs with other people, and send requests for e-transfer.
## How we built it
We used Xcode to program a multi-view app and implement all the screens/features necessary. We used Python and the Optical Character Recognition (OCR) built into the Google Cloud Vision API to implement text extraction using AI on the cloud. This was used specifically to pull item names and prices from the scanned receipts. We used Google Firebase to store user login information, receipt images, as well as recorded transactions and transaction details. Figma was utilized to design the front-end mobile interface that users interact with. The application itself was primarily developed with Swift, with a focus on iOS support.
## Challenges we ran into
We found that we had a lot of great ideas for utilizing sponsor APIs, but due to time constraints we were unable to fully implement them. The main challenge was incorporating the Request Money option of the Interac API into our application and Swift code. We found that since the API was in beta, it was difficult to implement in an iOS app. We certainly hope to keep working on the implementation of the Interac API as it is a crucial part of our product.
## Accomplishments that we're proud of
Overall, our team was able to develop a functioning application and use new APIs provided by sponsors. We used modern design elements and integrated them with the software.
## What we learned
We learned about implementing different APIs and overall iOS development. We also had very little experience with the Flask backend deployment process. This proved to be quite difficult at first, but we learned about setting up environment variables and off-site server setup.
## What's next for CashDat
We see a great opportunity for the further development of CashDat as it helps streamline current payment methods. We plan on continuing to develop this application to further optimize the user experience.
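As a reference for the receipt-scanning step described in "How we built it", here is a minimal sketch of the Cloud Vision text-detection call. The price-parsing regex and the way line items are grouped are illustrative assumptions, not CashDat's exact code.

```python
# Sketch of receipt OCR with Google Cloud Vision; the line-item parsing
# (regex for trailing prices) is an illustrative assumption.
import re
from google.cloud import vision

def extract_items(receipt_path):
    client = vision.ImageAnnotatorClient()
    with open(receipt_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    # The first annotation holds the full detected text of the receipt
    full_text = response.text_annotations[0].description if response.text_annotations else ""
    items = []
    for line in full_text.splitlines():
        match = re.search(r"(.+?)\s+\$?(\d+\.\d{2})$", line.strip())
        if match:
            items.append((match.group(1), float(match.group(2))))
    return items  # e.g. [("Burger", 8.99), ("Fries", 3.49)]
```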
Countless avid programmers around the world suffer from Repetitive Strain Injury (RSI) due to hours upon hours of continuous coding. Even more people live with accessibility issues, hindering them from performing motor actions like typing and writing, and from general use of a computer. Vocode can change the status quo. Vocode integrates the Nuance Natural Language Understanding (NLU) API, an Azure web server, and a dotTech domain to form a polished speech-to-text coding platform. Simply speak into the microphone and Vocode will interpret the words into executable code. According to a commenter on i-programmer.info, the average English speaker speaks at approximately 110–150 wpm. Audiobooks are recorded at 150-160 wpm. Auctioneers can speak at about 250 wpm, and the fastest-speaking policy debaters speak at 350-500 wpm. For comparison, an average computer typist can produce 50-80 wpm, while record typists can achieve up to 150 wpm. Gone are the days when people must waste hours inefficiently typing code. Welcome to the new world. Welcome to Vocode.
## Inspiration McMaster's SRA presidential debate brought to light the issue of garbage sorting on campus. Many recycling bins were contaminated and were subsequently thrown into a landfill. During the project's development, we became aware of the many applications of this technology, including sorting raw materials, and manufacturing parts. ## What it does The program takes a customizable trained deep learning model that can categorize over 1000 different classes of objects. When an object is placed in the foreground of the camera, its material is determined and its corresponding indicator light flashes. This is to replicate a small-scale automated sorting machine. ## How we built it To begin, we studied relevant modules of the OpenCV library and explored ways to implement them for our specific project. We also determined specific categories/materials for different classes of objects to build our own library for sorting. ## Challenges we ran into Due to time constraints, we were unable to train our own data set for the specific objects we wanted. Many pre-trained models are designed to run on much stronger hardware than a raspberry pi. Being limited to pre-trained databases added a level of difficulty for the software to detect our specific objects. ## Accomplishments that we're proud of The project actually worked and was surprisingly better than we had anticipated. We are proud that we were able to find a compromise in the pre-trained model and still have a functioning application. ## What we learned We learned how to use OpenCV for this application, and the many applications of this technology in the deep learning and IoT industry. ## What's next for Smart Materials Sort We'd love to find a way to dynamically update the training model (supervised learning), and try the software with our own custom models.
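For reference, the classification loop described above can be sketched roughly as follows with OpenCV's DNN module. The model files, class list, and label-to-material mapping are placeholders; the actual project used a different pre-trained model and drove indicator LEDs from the Raspberry Pi's GPIO pins.

```python
# Rough sketch: classify the object in view and map the label to a bin.
# Model files, class_names and MATERIAL_MAP are placeholders; the real
# project used a different pre-trained network on a Raspberry Pi.
import cv2

net = cv2.dnn.readNetFromCaffe("mobilenet.prototxt", "mobilenet.caffemodel")
class_names = open("synset_words.txt").read().splitlines()
MATERIAL_MAP = {"water bottle": "plastic", "beer glass": "glass", "envelope": "paper"}

def classify_frame(frame):
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (224, 224)), 1.0 / 255, (224, 224))
    net.setInput(blob)
    preds = net.forward().flatten()
    label = class_names[int(preds.argmax())]
    return label, MATERIAL_MAP.get(label, "unknown")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    label, material = classify_frame(frame)
    print(label, "->", material)  # on the Pi, `material` selects which LED to flash
cap.release()
```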
## Inspiration
We wanted to create something that will impact the community in a meaningful way, and decided to make something that will introduce more people to the world of programming by **breaking the accessibility barrier**. Everyone programs by physically typing on a keyboard without a second thought, but what about those who are unable to interact with a keyboard, or find it hard to do so? That's why we've decided to build Speech2Program for all the people out there that don't yet know they love programming!
## What it does
Speech2Program is a VS Code extension that takes in voice input from the user's microphone, parses the message, and performs actions within the editor, ultimately helping the user program via voice. The user is able to write classes and functions, move the cursor, select text, and much more - all by voice! Here are a few of the things the user could say:
**"New function test"**
The editor would then type out:
```
def test():
```
**"For x in range 0 to 100"**
```
for x in range(0, 100):
```
**"Jump to line 160"**, **"undo"**, **"new line"**
## How I built it
This VS Code extension was mainly built with Microsoft's VS Code extension API and Google Cloud's Speech-to-Text API. Input from the user's microphone is transcribed into text, which is then parsed into actions in the editor.
## Challenges I ran into
The biggest challenge of the project was our understanding of NLP technologies and frameworks. None of us had any prior experience working with NLP, so we had a lot of trouble at the start of the project trying to learn, understand, and implement it. We were unable to get a satisfactory working product that way, and thus decided to use Google Cloud Speech-to-Text with customized context phrases to get maximum accuracy and working functionality, rather than full language processing, for this time-limited project.
## Accomplishments that I'm proud of
We are extremely proud that we were able to develop the VS Code extension all in under a day, given that we were all new to the VS Code extension API. We learned a lot during this process of reading VS Code's and Google Cloud's documentation. Ultimately, we are proud that we were able to use our extension to code a relatively simple program entirely via voice.
## What I learned
We attended the hackathon with the objective of learning and developing software that would make an impact on the community. After coming up with the idea of Speech2Program, we realized that we were new to all the technologies that we had to use in creating the VS Code extension. A whole lot of time was spent reading and learning the VS Code extension API, understanding how to interact with the editor, and triggering shortcuts. In addition, we learned how to use the Google Cloud Speech-to-Text API to take input from our microphones and convert it into readable text. We also had to learn one of the Google technologies - gRPC, a REST API alternative made by Google that is faster at receiving and sending data.
## What's next for Speech2Program
Customization and optimisation of Google's Cloud Speech-to-Text service will allow us to create a "programmer" profile that is specifically designed to understand and interpret coding lingo, which will greatly enhance the usability and efficiency of Speech2Program. Specifically, we wish to use more NLP to effectively break down user sentences into coding chunks with greater accuracy, to increase flexibility and move away from the naive and limited approach. Expanding Speech2Program to include more programming languages is also a task that we have planned, to allow users more freedom in their project language.
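As a reference for the customized context phrases mentioned above, this is roughly what a Speech-to-Text request with phrase hints biased toward coding vocabulary looks like. The extension itself calls the API from JavaScript; this Python sketch and its phrase list are illustrative only.

```python
# Sketch of Google Cloud Speech-to-Text with phrase hints biased toward
# coding vocabulary; the actual extension calls the API from JavaScript.
from google.cloud import speech

def transcribe_command(audio_bytes):
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[speech.SpeechContext(
            phrases=["new function", "for loop", "jump to line", "undo", "new line"]
        )],
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # Join the top alternative of each result into one command string
    return " ".join(r.alternatives[0].transcript for r in response.results)
```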
## Inspiration We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept except towards making more sustainable purchases online. ## What it does Recommends sustainable and local business alternatives when shopping online. ## How we built it Front-end was built with React.js and Bootstrap. The back-end was built with Python, Flask and CockroachDB. ## Challenges we ran into Difficulties setting up the environment across the team, especially with cross-platform development in the back-end. Extracting the current URL from a webpage was also challenging. ## Accomplishments that we're proud of Creating a working product! Successful end-to-end data pipeline. ## What we learned We learned how to implement a Chrome Extension. Also learned how to deploy to Heroku, and set up/use a database in CockroachDB. ## What's next for Conscious Consumer First, it's important to expand to make it easier to add local businesses. We want to continue improving the relational algorithm that takes an item on a website, and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data easier.
## Inspiration
Over the summer, one of us was reading about climate change, but then he realised that most of the news articles he came across were very negative and affected his mental health to the point that it was hard to think about the world as a happy place. However, one day he watched a YouTube video that talked about the hope that exists in that sphere and realised the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given that we've seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels. The idea is to maintain a score of how much negative content a user reads (detected using co:here), and once it passes a certain threshold (we store the scores using CockroachDB), we show them a positive news article in the same topic area they were reading about. We do this by doing text analysis with a Chrome extension front end and a Flask + CockroachDB backend that uses co:here for natural language processing. Since a lot of people also listen to news via video, we also created a part of our Chrome extension to transcribe audio to text - so we included that at the start of our pipeline as well! At the end, if the "negativity threshold" is passed, the Chrome extension tells the user that it's time for some good news and suggests a relevant article.
## How we built it
**Frontend**
We used a Chrome extension for the front end, which included handling the user experience and making sure that our application actually gets the attention of the user while being useful. We used React.js, HTML, and CSS to handle this. There were also a lot of API calls, because we needed to transcribe the audio from Chrome tabs and provide that information to the backend.
**Backend**
## Challenges we ran into
It was really hard to make the Chrome extension work because of the many security constraints that websites have. We thought that making the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and the flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a CockroachDB node and client and used it to store all of our classification data
4) Added support for multiple users of the extension that can leverage CockroachDB's relational schema
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding
## What we learned
1) We learned a lot about how to use CockroachDB in order to create a database of news articles and topics that also supports multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed, with a link to a positive article, every time the user visits a negative page after the threshold has been crossed. The intended fix would be to add a column to one of our existing CockroachDB tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine whether we should push a notification to the user or not.
2) We would also like to fine-tune our machine learning further. For example, right now we classify articles by topic broadly (such as war, COVID, sports, etc.) and show a related positive article in the same category. Given more time, we would want to provide positive article suggestions that are more semantically similar to those the user is reading. We could use co:here or other large language models to potentially explore that.
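As a rough reference for the scoring pipeline described in "What it does", the per-user negativity score might be maintained like this. The model ID, threshold, and in-memory score table are illustrative assumptions; in the real app the scores live in CockroachDB.

```python
# Sketch of the negativity-scoring step; the model id, threshold and the
# per-user score table are illustrative, not the production schema.
import cohere

co = cohere.Client("YOUR_API_KEY")
NEGATIVITY_THRESHOLD = 5.0
user_scores = {}  # in production this lives in CockroachDB

def record_article(user_id, article_text):
    # Classify the page with a fine-tuned sentiment model (id is a placeholder)
    result = co.classify(model="finetuned-sentiment-model", inputs=[article_text])
    prediction = result.classifications[0]
    if prediction.prediction == "negative":
        user_scores[user_id] = user_scores.get(user_id, 0) + prediction.confidence
    if user_scores.get(user_id, 0) >= NEGATIVITY_THRESHOLD:
        user_scores[user_id] = 0
        return True  # time to suggest a positive article in the same topic
    return False
```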
## Inspiration This project was inspired by a personal anecdote. Two of the teammates, A and B, were hanging out in friend C’s dorm room. When it was time to leave, teammate B needed to grab his bag from teammate A’s dorm room. However, to their dismay, teammate A accidentally left her keycard in friend C’s dorm room, who left to go to a party. This caused A and B to wait for hours for C to return. This event planted a seed for this project in the back of teammates A and B’s minds, hoping to bring convenience to students’ lives and eliminate the annoyance of forgetting their keycards and being unable to enter their dorm rooms. ## What it does This device aims to automate the dorm room lock by allowing users to control the lock using a mobile application. The door lock’s movement is facilitated by a 3D-printed gear on a bar, and the gear is attached to a motor, controlled by an Arduino board. There are two simple steps to follow to enter the dorm. First, a phone needs to be paired with the device through Bluetooth. Both the “Pair with Device” button in the app and the button on the Bluetooth Arduino board are clicked. This only needs to be done for the first time the user is using this device. Once a connection is established between the Bluetooth board and the mobile app, the user can simply click the “Unlock door” button on the app, facilitating the communication between the Bluetooth board and the Motor board, causing the gear to rotate and subsequently causing the rod to bring down the door handle, unlocking the door. ## How we built it We used Android Studio to develop the mobile application in Java. The gear and bar were designed using Fusion360 and 3D-printed. Two separate Arduino boards were attached to the ESP32-S Bluetooth module and the motor attached to the gear, respectively, and the boards are controlled by the software part of an Arduino program programmed in C++. PlatformIO was used to automate the compilation and linking of code between hardware and software components. ## Challenges we ran into Throughout the build process, we encountered countless challenges, with a few of the greatest being understanding how the two Arduino boards communicate, figuring out the deployment mechanism of the ESP32-S module when our HC-05 was dysfunctional, and maintaining the correct circuit structure for our motor and LCD. ## Accomplishments that we're proud of Many of our greatest accomplishments stemmed from overcoming the challenges that we faced. For example, the wiring of the motor circuit was a major concern in the initial circuit setup process: following online schematics on how to wire the Nema17 motor, the motor did not perform full rotations, and thus would not have the capability to be integrated with other hardware components. This motor is a vital component for the workings of our mechanism, and with further research and diligence, we discovered that the issue was related to our core understanding of how the circuit performs and obtaining the related drivers needed to perform our tasks. This was one of our most prominent hardware accomplishments as it functions as the backbone for our mechanism. A more lengthy, software achievement we experienced was making the ESP32-S microcontroller functional. ## What we learned For several members of our group, this marked their initial exposure to GitHub within a collaborative environment. 
Given that becoming acquainted with this platform is crucial in many professional settings, this served as an immensely beneficial experience for our novice hackers. Additionally, for the entire team, this was the first experience operating with Bluetooth technology. This presented a massive learning curve, challenging us to delve into the intricacies of Bluetooth, understand its protocols, and navigate the complexities of integrating it into our project. Despite the initial hurdles, the process of overcoming this learning curve fostered a deeper understanding of wireless communication and added a valuable skill set to our collective expertise. Most importantly, however, we learned that with hard work and perseverance, even the most daunting challenges can be overcome. Our journey with GitHub collaboration and Bluetooth integration served as a testament to the power of persistence and the rewards of pushing beyond our comfort zones. Through this experience, we gained not only technical skills but also the confidence to tackle future projects with resilience and determination. ## What's next for Locked In Some future steps for Locked In would hopefully be to create a more robust authentication system through Firebase. This would allow users to sign in via other account credentials, such as Email, Facebook, and Google, and permit recognized accounts to be stored and managed by a centralized host. This objective would not only enhance security but also streamline user management, ensuring a seamless and user-friendly experience across various platforms and authentication methods. Another objective of Locked In is to enhance the speed of Bluetooth connections, enabling users to fully leverage the convenience of not needing a physical key or card to access their room. This enhancement would offer users a faster and smoother experience, simplifying the process of unlocking doors and ensuring swift entry. One feature that we did not finish implementing was the gyroscope, which automatically detects when the door is open and
## Inspiration Are you out in public but scared about people standing too close? Do you want to catch up on the social interactions at your cozy place but do not want to endanger your guests? Or you just want to be notified as soon as you have come in close contact to an infected individual? With this app, we hope to provide the tools to users to navigate social distancing more easily amidst this worldwide pandemic. ## What it does The Covid Resource App aims to bring a one-size-fits-all solution to the multifaceted issues that COVID-19 has spread in our everyday lives. Our app has 4 features, namely: - A social distancing feature which allows you to track where the infamous "6ft" distance lies - A visual planner feature which allows you to verify how many people you can safely fit in an enclosed area - A contact tracing feature that allows the app to keep a log of your close contacts for the past 14 days - A self-reporting feature which enables you to notify your close contacts by email in case of a positive test result ## How we built it We made use primarily of Android Studio, Java, Firebase technologies and XML. Each collaborator focused on a task and bounced ideas off of each other when needed. The social distancing feature functions based on a simple trigonometry concept and uses the height from ground and tilt angle of the device to calculate how far exactly is 6ft. The visual planner adopts a tactile and object-oriented approach, whereby a room can be created with desired dimensions and the touch input drops 6ft radii into the room. The contact tracing functions using Bluetooth connection and consists of phones broadcasting unique ids, in this case, email addresses, to each other. Each user has their own sign-in and stores their keys on a Firebase database. Finally, the self-reporting feature retrieves the close contacts from the past 14 days and launches a mass email to them consisting of quarantining and testing recommendations. ## Challenges we ran into Only two of us had experience in Java, and only one of us had used Android Studio previously. It was a steep learning curve but it was worth every frantic google search. ## What we learned * Android programming and front-end app development * Java programming * Firebase technologies ## Challenges we faced * No unlimited food
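For the curious, the trigonometry behind the social-distancing feature reduces to one formula: if the phone is held at height h above the ground and tilted down by an angle θ from the horizontal, it points at a spot on the ground d = h / tan(θ) away. A small sketch of that check (the Android sensor readings themselves are assumed inputs):

```python
# Sketch of the 6 ft check: distance along the ground the camera points at,
# given device height and downward tilt. Sensor values are assumed inputs,
# and the tilt must be strictly below horizontal (angle > 0).
import math

SIX_FEET_M = 1.83  # 6 ft in metres

def ground_distance(height_m, tilt_down_deg):
    """Distance from the user to the spot the device is aimed at."""
    return height_m / math.tan(math.radians(tilt_down_deg))

def is_far_enough(height_m, tilt_down_deg):
    return ground_distance(height_m, tilt_down_deg) >= SIX_FEET_M

# e.g. a phone held 1.5 m up, tilted 35 degrees below horizontal:
print(round(ground_distance(1.5, 35), 2), "m", is_far_enough(1.5, 35))
```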
## ✨ Inspiration
Quarantining is hard, and during the pandemic, symptoms of anxiety and depression are shown to be at their peak 😔[[source]](https://www.kff.org/coronavirus-covid-19/issue-brief/the-implications-of-covid-19-for-mental-health-and-substance-use/). To combat the negative effects of isolation and social anxiety [[source]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7306546/), we wanted to provide a platform for people to seek out others with similar interests. To reduce any friction between new users (who may experience anxiety or just be shy!), we developed an AI recommendation system that can suggest virtual, quarantine-safe activities, such as Spotify listening parties🎵, food delivery suggestions 🍔, or movie streaming 🎥 from the comfort of one's own home.
## 🧐 What is Friendle?
Quarantining alone is hard😥. Choosing fun things to do together is even harder 😰. After signing up for Friendle, users can create a deck showing their interests in food, games, movies, and music. Friendle matches similar users together and puts together some hangout ideas for those matched. 🤝💖
## 🧑‍💻 How we built Friendle
To start off, our designer created a low-fidelity mockup in Figma to get a good sense of what the app would look like. We wanted it to have a friendly and inviting look, with simple actions as well. Our designer also created all the vector illustrations to give the app a cohesive appearance. Later on, our designer created a high-fidelity mockup for the front-end developer to follow. The frontend was built using React Native.
![arch diagram](https://raw.githubusercontent.com/christinetrac/uofthacks-frontend/master/docs/uofthacks-backend.jpg)
We split our backend tasks into two main parts: 1) API development for DB accesses and 3rd-party API support and 2) similarity computation, storage, and matchmaking. Both the APIs and the batch computation app use Firestore to persist data.
### ☁️ Google Cloud
For the API development, we used Google Cloud Platform Cloud Functions with the API Gateway to manage our APIs. The serverless architecture allows our service to automatically scale up to handle high load and scale down when there is little load to save costs. Our Cloud Functions run on Python 3, and access the Spotify, Yelp, and TMDB APIs for recommendation queries. We also have a NoSQL schema to store our users' data in Firebase.
### 🖥 Distributed Computer
The similarity computation and matching algorithm are powered by a Node.js app which leverages the Distributed Computer for parallel computing. We encode the user's preferences and Myers-Briggs type into a feature vector, then compare similarity using cosine similarity. The cosine similarity algorithm is a good candidate for parallelizing since each computation is independent of the results of others. We experimented with different strategies to batch up our data prior to slicing & job creation to balance the trade-off between individual job compute speed and scheduling delays. By selecting a proper batch size, we were able to reduce our overall computation time by around 70% (this varies based on the status of the DC network, distribution scheduling, etc.).
## 😢 Challenges we ran into
* We had to be flexible with modifying our API contracts as we discovered more about 3rd-party APIs and our front-end designs became more fleshed out.
* We spent a lot of time designing for features and scalability problems that we would not necessarily face in a hackathon setting. We also faced some challenges with deploying our service to the cloud.
* Parallelizing load with DCP
## 🏆 Accomplishments that we're proud of
* Creating a platform where people can connect with one another, alleviating the stress of quarantine and social isolation
* Smooth and fluid UI with slick transitions
* Learning about and implementing a serverless back-end allowed for quick setup and iterating changes.
* Designing and creating a functional REST API from scratch - you can make a POST request to our test endpoint (with your own interests) to get recommended quarantine activities anywhere, anytime 😊 e.g. `curl -d '{"username":"turbo","location":"toronto,ca","mbti":"entp","music":["kpop"],"movies":["action"],"food":["sushi"]}' -H 'Content-Type: application/json' 'https://recgate-1g9rdgr6.uc.gateway.dev/rec'`
## 🚀 What we learned
* Balancing the trade-off between computational cost and scheduling delay for parallel computing can be a fun problem :)
* Moving a server-based architecture (Flask) to serverless in the cloud ☁
* How to design and deploy APIs and structure a good schema for our developers and users
## ⏩ What's next for Friendle
* Make a web app for desktop users 😎
* Improve matching algorithms and architecture
* Adding a messaging component to the app
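As a reference for the matching step described in "How we built Friendle", the core of the computation is just cosine similarity over encoded preference vectors; each pairwise comparison is independent, which is what makes it easy to batch out to the Distributed Computer. A minimal, non-distributed sketch with a toy encoding (the real feature scheme differs):

```python
# Minimal sketch of the cosine-similarity matchmaking; the feature encoding
# here is a toy multi-hot vector, not Friendle's exact scheme.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(user_vec, others):
    """Return the (user_id, score) of the most similar other user."""
    scored = [(uid, cosine_similarity(user_vec, vec)) for uid, vec in others.items()]
    return max(scored, key=lambda pair: pair[1])

vocab = ["kpop", "jazz", "action", "comedy", "sushi", "burgers", "entp", "infj"]
def encode(prefs):
    return np.array([1.0 if term in prefs else 0.0 for term in vocab])

turbo = encode({"kpop", "action", "sushi", "entp"})
others = {"ada": encode({"kpop", "comedy", "sushi", "infj"}),
          "lin": encode({"jazz", "burgers", "entp"})}
print(best_match(turbo, others))  # ('ada', 0.5)
```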
## Have a look! <https://www.tricle.org/> ## Inspiration It started when we met back in February at Treehacks 2019. We joined forces as a result of our collective restlessness on the effects of networks on corruption. As a first step, we decided to help citizens form informed decisions by collecting and summarizing large amounts of data and visualizing the relationships. Our inspiration for tackling this problem was a highly visible recent instance of this corruption. Hurricane Maria ravaged Puerto Rico in September 2017, killing thousands, spending tens of thousands of public funds, and completely destroying the infrastructure of the island. To make this terrible situation even worse, the government-run utility company of Puerto Rico, PREPA, issued an absurd contract to rebuild the island's electrical infrastructure the same day as the hurricane hit. The $300M contract was won by Whitefish Energy Holdings LLC, a Montana company with only 2 employees, which had been formed just 6 months prior. It was glaringly apparent that Whitefish was incapable of handling this massively important contract and information came out in the months after confirming that there were shady dealings behind this contract award. Not only was this a money sink, but this added further duress to the citizens of Puerto Rico, some of whom waited nearly a year (328 days) to get power back due to the ineptness of Whitefish and ensuing re-compete of the contract. For more information about government corruption and the Whitefish incident see: <https://www.washingtonpost.com/news/posteverything/wp/2017/10/30/the-whitefish-contract-in-puerto-rico-shows-the-real-cost-of-bad-government/?noredirect=on&utm_term=.6eb02c1219fc> <http://www.oecd.org/gov/ethics/Corruption-Public-Procurement-Brochure.pdf> While further developing this project, we also acknowledged the power of connecting entities in stories that were not only specifically related to corruption. And this is how *tricle* came to be. ## What it does It provides easily parsable, digested information on relationships among important figures in news stories. Knowing how politicians and companies relate to corruption cases eases participation in our democracy by becoming more responsible citizens. We used natural language processing from the Google Cloud API to determine relationships between key entities in a plot using news articles written about the topic. Our program iterates through publications on a story, pulling out key names and places, and developing a connection network between entities. This network of connection is then displayed graphically so that the user can understand how the key players in a plot are connected. ## How we built it We used natural language processing from the Google Cloud API to build the model for the connections display. The graph of the connections was made using Python. ## What's next for *tricle* Awareness is one of the most powerful tools to promote social justice, ensure equality, and upkeep the quality of a democracy. On the contrary, lack of access to information reinforces old habits, old feedback loops, and vicious networks, both in individual and societal levels. Understanding networks may help understanding better the corruption in top-management-level networks and in procurement processes, which are widely considered to be the most harmful kind of misbehavior since it usually leads to enormous amounts of undesired expenditure of public funds. 
**At <https://www.tricle.org> you can see our first attempt at tackling the deep, essential problem our society faces around transparency. But the opportunities are far greater! There is an abundance of important information hidden in plain sight. Deep in government contracts and tenders lies a corrupt world that is only dug up by relentlessly working journalists. State-of-the-art machine learning technology can help them scale up their efforts by parsing large unstructured datasets - texts that are otherwise impenetrable to humans due to their sheer size.** Join us in transforming the world, to ensure a brighter tomorrow!
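As a reference for the pipeline described in "What it does", the entity-extraction step with the Cloud Natural Language API and the co-occurrence edges it feeds into can be sketched like this (the entity-type filter and edge weighting are simplifications):

```python
# Sketch of entity extraction + co-occurrence edges with the Cloud
# Natural Language API; the edge weighting here is a simplification.
from itertools import combinations
from collections import Counter
from google.cloud import language_v1

def article_edges(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=text,
                                    type_=language_v1.Document.Type.PLAIN_TEXT)
    response = client.analyze_entities(request={"document": document})
    # Keep the names of people, organizations and places mentioned together
    names = {e.name for e in response.entities
             if language_v1.Entity.Type(e.type_).name in ("PERSON", "ORGANIZATION", "LOCATION")}
    # Each unordered pair of co-occurring entities becomes a graph edge
    return Counter(frozenset(pair) for pair in combinations(sorted(names), 2))

# Summing these Counters over all articles about a story yields the weighted
# connection graph that the visualization draws.
```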
## Inspiration
It took us a while to think of an idea for this project; after a long day of Zoom school, we sat down on Friday with very little motivation to do work. As we pushed through this lack of drive, our friends in the other room would offer little encouragements to keep us going, and we started to realize just how powerful those comments are. For all people working online, and university students in particular, balancing life on and off the screen is difficult. We often find ourselves forgetting to do daily tasks like drinking enough water or even just taking a small break, and when we do, there is very often negativity towards the idea of rest. This is where You're Doing Great comes in.
## What it does
Our web application is focused on helping students and online workers alike stay motivated throughout the day while making the time and space to care for their physical and mental health. Users are able to select different kinds of activities that they want to be reminded about (e.g. drinking water, eating food, movement, etc.) and they can also input messages that they find personally motivational. Then, throughout the day (at their own predetermined intervals), they will receive random positive messages, either through text or call, that will inspire and encourage. There is also an additional feature where users can send messages to friends so that they can share warmth and support, because we are all going through it together. Lastly, we understand that sometimes positivity and understanding aren't enough for what someone is going through, and so we have a list of further resources available on our site.
## How we built it
We built it using:
* AWS + DynamoDB + Lambda + Cognito + APIGateway + Amplify
* React + Redux + React-Dom + MaterialUI
* serverless
* Twilio
* Domain.com
* Netlify
## Challenges we ran into
Centring divs should not be so difficult :(
Transferring the name servers from domain.com to Netlify
Serverless deploying with dependencies
## Accomplishments that we're proud of
Our logo! It works :)
## What we learned
We learned how to host a domain and we improved our front-end HTML/CSS skills.
## What's next for You're Doing Great
We could always implement more reminder features, and we could refine our friends feature so that people can only include selected individuals. Additionally, we could add chatbot functionality so that users could do a little check-in when they get a message.
## Inspiration As post secondary students, our mental health is directly affected. Constantly being overwhelmed with large amounts of work causes us to stress over these large loads, in turn resulting in our efforts and productivity to also decrease. A common occurrence we as students continuously endure is this notion that there is a relationship and cycle between mental health and productivity; when we are unproductive, it results in us stressing, which further results in unproductivity. ## What it does Moodivity is a web application that improves productivity for users while guiding users to be more in tune with their mental health, as well as aware of their own mental well-being. Users can create a profile, setting daily goals for themselves, and different activities linked to the work they will be doing. They can then start their daily work, timing themselves as they do so. Once they are finished for the day, they are prompted to record an audio log to reflect on the work done in the day. These logs are transcribed and analyzed using powerful Machine Learning models, and saved to the database so that users can reflect later on days they did better, or worse, and how their sentiment reflected that. ## How we built it ***Backend and Frontend connected through REST API*** **Frontend** * React + UI framework the application was written in * JavaScript + Language the frontend was written in * Redux + Library used for state management in React * Redux-Sagas + Library used for asynchronous requests and complex state management **Backend** * Django + Backend framework the application was written in * Python + Language the backend was written in * Django Rest Framework + built in library to connect backend to frontend * Google Cloud API + Speech To Text API for audio transcription + NLP Sentiment Analysis for mood analysis of transcription + Google Cloud Storage to store audio files recorded by users **Database** * PostgreSQL + used for data storage of Users, Logs, Profiles, etc. ## Challenges we ran into Creating a full-stack application from the ground up was a huge challenge. In fact, we were almost unable to accomplish this. Luckily, with lots of motivation and some mentorship, we are comfortable with naming our application *full-stack*. Additionally, many of our issues were niche and didn't have much documentation. For example, we spent a lot of time on figuring out how to send audio through HTTP requests and manipulating the request to be interpreted by Google-Cloud's APIs. ## Accomplishments that we're proud of Many of our team members are unfamiliar with Django let alone Python. Being able to interact with the Google-Cloud APIs is an amazing accomplishment considering where we started from. ## What we learned * How to integrate Google-Cloud's API into a full-stack application. * Sending audio files over HTTP and interpreting them in Python. * Using NLP to analyze text * Transcribing audio through powerful Machine Learning Models ## What's next for Moodivity The Moodivity team really wanted to implement visual statistics like graphs and calendars to really drive home visual trends between productivity and mental health. In a distant future, we would love to add a mobile app to make our tool more easily accessible for day to day use. Furthermore, the idea of email push notifications can make being productive and tracking mental health even easier.
## Inspiration As humans, it is impossible to always be in perfect shape, health, and condition. This includes our mental health and wellbeing. To tackle this problem, we created a website that encourages users to complete simple tasks that have been proven to improve mood, drive, and wellbeing. These tasks include journaling, meditating and reflecting. ## What it does It is a web-based platform with three main functions: 1. Generates reflective reminders and actionable suggestions 2. Provides a platform for journalling gratefulness and aspirations 3. Delivers sounds and visuals for peaceful meditation ## How we built it We used HTML, CSS and vanilla Javascript to build the entire platform. We also used the following libraries: * Wired-elements: <https://github.com/rough-stuff/wired-elements> * Flaticons: <https://www.flaticon.com/> * Button Hover: <https://codepen.io/davidicus/pen/emgQKJ> ## Challenges we ran into Communication between the backend and frontend could've been better. Often times, the backend sent code to the frontend with very little details which caused some confusion. There were also some issues with the size of the MP4 file during the upload to GitHub. ## Accomplishments that we're proud of We are proud to have finished the entire website way ahead of schedule with very minimal errors and problems. We are also proud of the user interface. ## What we learned * Backend to frontend integration * Github Pages * VScode Live Share * Software Architecture ## What's next for First Step * Develop mobile version * Deploy to web * Improve responsive design
## Inspiration
While we were coming up with ideas on what to make, we looked around at each other while sitting in the room and realized that our postures weren't that great. We knew that it was pretty unhealthy for us to be seated like this for prolonged periods. This inspired us to create a program that could help remind us when our posture is terrible and needs to be adjusted.
## What it does
Our program uses computer vision to analyze our position in front of the camera. Sit Up! takes your position at a specific frame and measures different distances and angles between key points such as your shoulders, nose, and ears. From there, the program combines all these measurements in a set of mathematical equations and compares the results to a database of thousands of positions to see if yours is good.
## How we built it
We built it using Flask, JavaScript, TensorFlow, and scikit-learn.
## Challenges we ran into
The biggest challenge we faced was how inefficient and slow our first approach was. Initially, our plan was to use Django for an API that gives us the necessary information, but it was slower than anything we'd seen before; that is when we came up with client-side rendering. Doing everything in Flask made this project 10x faster and much more efficient.
## Accomplishments that we're proud of
Implementing client-side rendering for an ML model
Getting out of our comfort zone by using Flask
Having nearly perfect accuracy with our model
Being able to pivot our tech stack and be so versatile
## What we learned
We learned a lot about Flask
We learned a lot about the basics of ANNs
We learned more about how to implement computer vision for a use case
## What's next for Sit Up!
Implement a phone app
Calculate the accuracy of our model
Enlarge our data set
Support higher frame rates
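As a reference for the measurement step described in "What it does", one of the simplest posture features is the angle of the ear-to-shoulder line from vertical, which grows as the head drifts forward. A hedged sketch of that kind of feature follows; the keypoints come from a pose estimator, and the threshold is an illustrative guess rather than the trained model's decision rule.

```python
# Sketch of a keypoint-based posture feature; keypoints are (x, y) pixels
# from a pose estimator, and the 25-degree default is an illustrative guess.
import numpy as np

def neck_angle(ear, shoulder):
    """Angle of the ear-shoulder line from vertical, in degrees."""
    dx = ear[0] - shoulder[0]
    dy = shoulder[1] - ear[1]  # image y grows downward, ear sits above shoulder
    return np.degrees(np.arctan2(abs(dx), dy))

def is_slouching(ear, shoulder_l, shoulder_r, max_angle_deg=25):
    shoulder_mid = ((shoulder_l[0] + shoulder_r[0]) / 2,
                    (shoulder_l[1] + shoulder_r[1]) / 2)
    return neck_angle(ear, shoulder_mid) > max_angle_deg

# Head roughly above the shoulders -> not slouching
print(is_slouching((330, 205), (260, 300), (380, 300)))  # False
```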
# Pose-Bot
### Inspiration ⚡
**In these difficult times, when everyone is forced to work remotely and schools and colleges have gone digital, students are spending more time on the screen than ever before. This not only affects students but also employees who have to sit for hours in front of the screen. Prolonged exposure to a computer screen and sitting in a bad posture can cause severe health problems like postural dysfunction and can affect one's eyes. Therefore, we present to you Pose-Bot.**
### What it does 🤖
We created this application to help users maintain a good posture, save them from early signs of postural imbalance, and protect their vision. The application uses an image classifier from Teachable Machine, which is a **Google API**, to detect the user's posture and notifies the user to correct their posture or move away from the screen when they may not notice it. It notifies the user when he/she is sitting in a bad position or is too close to the screen. We first trained the model on the Google API to detect good posture/bad posture and whether the user is too close to the screen. Then we integrated the model into our application. We created a notification service so that the user can use any other site and simultaneously get notified if their posture is bad. We have also included **EchoAR models to educate** children about the harms of sitting in a bad position and the importance of healthy eyes 👀.
### How We built it 💡
1. The website UI/UX was designed using Figma and then developed using HTML, CSS and JavaScript. Tensorflow.js was used to detect pose and a JavaScript API to send notifications.
2. We used the Google Tensorflow.js API to train our model to classify the user's pose, proximity to the screen, and whether the user is holding a phone.
3. For training our model we used our own images as the training data and tested it in different settings.
4. This model is then used to classify the user's video feed to assess their pose and detect if they are slouching, too close to the screen, or sitting in a generally bad pose.
5. If the user sits in a bad posture for a few seconds, the bot sends a notification to the user to correct their posture or move away from the screen.
### Challenges we ran into 🧠
* Creating a model with good accuracy in a general setting.
* Reverse engineering the Teachable Machine's web plugin snippet to aggregate data and then display notifications at certain time intervals.
* Integrating the model into our website.
* Embedding EchoAR models to educate children about the harms of sitting in a bad position and the importance of healthy eyes.
* Deploying the application.
### Accomplishments that we are proud of 😌
We created a completely functional application, which can make a small difference in our everyday health. We successfully made the application display system notifications, which can be viewed across the system, even in different apps. We are proud that we could shape our idea into a functioning application which can be used by any user!
### What we learned 🤩
We learned how to integrate Tensorflow.js models into an application. The most exciting part was learning how to train a model on our own data using the Google API. We also learned how to create a notification service for an application. And best of all, **playing with EchoAR models** to create functionality which could actually benefit students and help them understand the severity of the cause.
### What's next for Pose-Bot 📈
#### ➡ Creating a Chrome extension
So that the user can use the functionality in their web browser.
#### ➡ Improve the pose detection model
The accuracy of the pose detection model can be increased in the future.
#### ➡ Create more classes to help students concentrate better
Include more functionality like screen time and detecting if the user is holding their phone, so we can help users to concentrate.
### Help File 💻
* Clone the repository to your local directory
* `git clone https://github.com/cryptus-neoxys/posture.git`
* `npm i -g live-server`
* Install live-server to run it locally
* `live-server .`
* Go to the project directory and launch the website using live-server
* Voilà, the site is up and running on your PC.
* Ctrl + C to stop the live-server!!
### Built With ⚙
* HTML
* CSS
* Javascript + Tensorflow.js + Web Browser API
* Google API
* EchoAR
* Google Poly
* Deployed on Vercel
### Try it out 👇🏽
* 🤖 [Tensorflow.js Model](https://teachablemachine.withgoogle.com/models/f4JB966HD/)
* 🕸 [The Website](https://pose-bot.vercel.app/)
* 🖥 [The Figma Prototype](https://www.figma.com/file/utEHzshb9zHSB0v3Kp7Rby/Untitled?node-id=0%3A1)
### 3️⃣ Cheers to the team 🥂
* [Apurva Sharma](https://github.com/Apurva-tech)
* [Aniket Singh Rawat](https://github.com/dikwickley)
* [Dev Sharma](https://github.com/cryptus-neoxys)
## Inspiration
As university students, we spend many hours hunched over our laptops or writing homework. We are always reminded by our parents that bad posture will lead to years of misery and back pain. However, they're not always there to remind us to sit up properly. So, we developed PostSURE!
## What it does
PostSURE analyzes your posture while you are using your computer. By comparing it to when you have good posture, it will recognize and alert you when you are slouching. There are settings to change its sensitivity and how often it checks.
## How we built it
We used Python's Tkinter to build the front-end desktop application (interesting choice…). The backend is built with Python, OpenCV, and PyTorch to identify the user's body. We use CockroachDB to store user data.
## Challenges we ran into
Nobody knew anything about Tkinter, but we still decided to learn it and use it. In hindsight, we are not sure why.
## Accomplishments that we're proud of
That we finished a hackathon project within the time limit!!! yippee!!!!!!
## What we learned
We learned many new technologies like CockroachDB and OpenCV.
## What's next for PostSURE
Using the data stored in the database to create a working analytics page that shows users' posture trends and maybe compares them with other users'.
## Inspiration
More than **2 million** people in the United States are affected by diseases such as ALS, brain or spinal cord injuries, cerebral palsy, muscular dystrophy, multiple sclerosis, and numerous other conditions that impair muscle control. Many of these people are confined to their wheelchairs; some may be lucky enough to be able to control their movement using a joystick. However, there are still many who cannot use a joystick, eye tracking systems, or head movement-based systems. Therefore, a brain-controlled wheelchair can solve this issue and provide freedom of movement for individuals with physical disabilities.
## What it does
BrainChair is a neurally controlled headpiece that can control the movement of a motorized wheelchair. There is no need to use the attached joystick: simply think of the wheelchair movement and the wheelchair does the rest!
## How we built it
The brain-controlled wheelchair allows the user to control a wheelchair solely using an OpenBCI headset. The headset is an electroencephalography (EEG) device that allows us to read brain signal data that comes from neurons firing in our brain. When we think of specific movements we would like to make, the corresponding neurons in our brain fire. We can collect this EEG data through the BrainFlow API in Python, which allows us to easily stream, filter, and preprocess the data, and then finally pass it into a classifier. The control signal from the classifier is sent over WiFi to a Raspberry Pi which controls the movement of the wheelchair. In our case, since we didn't have a motorized wheelchair on hand, we used an RC car as a replacement. We simply hacked some transistors onto the remote, which connects to the Raspberry Pi.
## Challenges we ran into
* Obtaining clean data for training the neural net took some time. We needed to apply signal processing methods to obtain the data.
* Finding the RC car was difficult since most stores didn't have it and were closed. Since the RC car was cheap, its components had to be adapted in order to place the hardware pieces.
* Working remotely made designing and working together challenging. Each group member worked on independent sections.
## Accomplishments that we're proud of
The most rewarding aspect of the software is that all the components, from the OpenBCI headset to the Raspberry Pi, were effectively communicating with each other.
## What we learned
One of the most important lessons we learned is how to effectively communicate technical information to each other across our respective disciplines (computer science, mechatronics engineering, mechanical engineering, and electrical engineering).
## What's next for Brainchair
To improve BrainChair in future iterations we would like to:
* Optimize the circuitry to use low power so that the battery lasts months instead of hours.
* Make the OpenBCI headset not visible by camouflaging it under hair or clothing.
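As a reference for the acquisition step described in "How we built it", this is roughly how an EEG window can be streamed and band-pass filtered with BrainFlow before classification. The sketch uses BrainFlow's synthetic board so it runs without hardware, the argument order follows recent BrainFlow releases, and classify_window() is a placeholder for the real model.

```python
# Sketch of EEG streaming + band-pass filtering with BrainFlow, using the
# synthetic board; classify_window() is a placeholder for the real classifier.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter, FilterTypes

params = BrainFlowInputParams()
board = BoardShim(BoardIds.SYNTHETIC_BOARD, params)
board.prepare_session()
board.start_stream()
time.sleep(2)  # let a couple of seconds of data accumulate

sampling_rate = BoardShim.get_sampling_rate(BoardIds.SYNTHETIC_BOARD)
eeg_channels = BoardShim.get_eeg_channels(BoardIds.SYNTHETIC_BOARD)
data = board.get_current_board_data(2 * sampling_rate)  # last 2 seconds
for ch in eeg_channels:
    # Keep the 8-30 Hz band typically used for motor-related activity
    DataFilter.perform_bandpass(data[ch], sampling_rate, 8.0, 30.0, 4,
                                FilterTypes.BUTTERWORTH, 0)
# command = classify_window(data[eeg_channels])  # e.g. "forward", "left", ...
board.stop_stream()
board.release_session()
```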
## Inspiration
Our biggest inspiration came from our grandparents, who often felt lonely and struggled to find help. Specifically, one of us has a grandpa with dementia. He lives alone and finds it hard to receive help since most of his relatives live far away and he has reduced motor skills. Knowing this, we were determined to create a product -- and a friend -- that would be able to help the elderly with their health while also being fun to be around! Ted makes this dream a reality, transforming lives and promoting better welfare.
## What it does
Ted is able to...
* be a little cutie pie
* chat with the speaker, with reactive movements based on the conversation (waves at you when greeting, idle bobbing)
* read heart rate, determine health levels, and provide help accordingly
* drive towards a person in need using the RC car, utilizing object detection and speech recognition
* dance to Michael Jackson
## How we built it
* popsicle sticks, cardboard, tons of hot glue, etc.
* sacrifice of my fingers
* Play.HT and Claude 3.5 Sonnet
* YOLOv8
* AssemblyAI
* Selenium
* Arduino, servos, and many sound sensors to determine the direction of the speaker
## Challenges we ran into
One challenge we ran into during development was making sure every part was secure. With limited materials, we found that parts would often shift or move out of place after a few test runs, which was frustrating to keep fixing. However, instead of trying the same techniques again, we persevered by trying new methods of attachment, which eventually led to a successful solution! Having 2 speech-to-text models open at the same time caused some issues (and I still haven't fixed it yet...). Creating reactive movements was difficult too, but we achieved it through the use of keywords and a long list of preset moves.
## Accomplishments that we're proud of
* Fluid head and arm movements of Ted
* Very pretty design on the car and poster board
* Very snappy response times with a realistic voice
## What we learned
* the power of friendship
* don't be afraid to try new things!
## What's next for Ted
* integrating more features to enhance Ted's ability to aid people's needs --> e.g. the ability to measure blood pressure
# Pitch Every time you throw trash in the recycling, you either spoil an entire bin of recyclables, or city workers and multi-million dollar machines separate the trash out for you. We want to create a much more efficient way to sort garbage that also trains people to sort correctly and provides meaningful data on sorting statistics. Our technology uses image recognition to identify the waste and opens the lid of the correct bin. When the image recognizer does not recognize the item, it opens all bins and trusts the user to deposit it. It also records the number of times a lid has been opened to estimate what and how much is in each bin. The statistics would have many applications. Since we display the proportion of all garbage in each bin, it will motivate people to compost and recycle more. It will also allow cities to recognize when a bin is full based on how much it has collected, allowing garbage trucks to optimize their routes. In addition, information about what items are commonly thrown into the trash would be useful to material engineers who can design recyclable versions of those products. Future improvements include improved speed and reliability, IOTA blockchain integration, facial recognition for personalized statistics, and automatic self-learning. # How it works 1. Raspberry Pi uses webcam and opencv to look for objects 2. When an object is detected the pi sends the image to the server 3. Server sends image to cloud image recognition services (Amazon Rekognition & Microsoft Azure) and determines which bin should be open 4. Server stores information and statistics in a database 5. Raspberry Pi gets response back from server and moves appropriate bin
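Step 3 of the flow above (sending the image to a cloud recognizer and deciding which lid to open) can be sketched like this with Amazon Rekognition. The label-to-bin mapping is a simplified placeholder, and the real system also consults Azure and falls back to opening every bin when nothing is recognized.

```python
# Sketch of step 3: label the image with Rekognition and pick a bin.
# The LABEL_TO_BIN mapping is a simplified placeholder.
import boto3

LABEL_TO_BIN = {"Bottle": "recycling", "Can": "recycling",
                "Banana": "compost", "Food": "compost"}

def choose_bin(image_bytes, min_confidence=80):
    client = boto3.client("rekognition")
    response = client.detect_labels(Image={"Bytes": image_bytes},
                                    MaxLabels=10, MinConfidence=min_confidence)
    for label in response["Labels"]:
        bin_name = LABEL_TO_BIN.get(label["Name"])
        if bin_name:
            return bin_name
    return "all"  # unrecognized item: open every lid and trust the user
```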
winning
## Inspiration During the Covid-19 pandemic, people are working remotely from home. There are many project management tools available, and they come with many features. Sometimes it is tough for people working in small groups, companies, or businesses to adopt a tool with so many features and options. The main purpose of these kinds of tools is to create tasks and goals, assign them to other employees, manage them, update progress, and monitor the status of each employee. EZRemote lets you do all of these things in the easiest possible way, including through chat interaction with the application. ## What it does EZRemote can be used to create employees, organizations, goals, and tasks, assign tasks to goals, add goals and employees under each organization, assign tasks to employees, update task progress, and even update the health status of each employee. It has an Admin account, which is created by the head of the company or business, and an Employee account. The Admin account is used to create tasks, goals, and organizations, add employees, and more. The Employee account is used by employees to see tasks and update progress. They can also update their health status, which is very important at the moment. There is a chat screen which can be accessed by the Admin and used to instruct the system to perform all of the above activities. Using this application, it is very much possible to work remotely with a small or large workforce and accomplish tasks efficiently. The application is really easy to adopt. ## How we built it There is a front-end application developed using Vue.js, which serves as the user interface. Then we have a back-end application developed with the Express.js framework; this server acts as the API server. MySQL is used for the database. To process chat messages from the user, a Natural Language Processing technique is used which extracts the nouns and verbs from each message and, with the help of some defined rules, predicts the output the user expects. Based on that, further actions take place. The nouns and verbs contain the key terms necessary for the operations. Parts-Of-Speech (POS) Tagging helped in this case: it allowed chat messages to be processed however the user phrased them without creating problems in predicting the output from the rules (a small POS-tagging illustration follows this write-up). ## Challenges we ran into Due to the shortage of time, many operations could not be completed. The rules for predicting the output of chat messages were not built very robustly. It also took a lot of time to process text using the POS tagger, as the technique itself was tricky and there were many options to consider. The biggest challenge I faced was selecting the main features and focusing on them, as there are many things that can be done in projects like these. ## Accomplishments that we're proud of I wanted to participate in this hackathon with a team, but I was a bit late. So without wasting any time I started working on this project, knowing that it would not be possible to complete much in 36-40 hours. But I am really happy and proud that I could complete a full-stack application with a front end, back end, and database. ## What we learned I have learned a lot from this project. First of all, I gained huge confidence that I can finish a big project like this in just 36-40 hours. Secondly, I learned a lot about Natural Language Processing techniques and using them in a JavaScript backend. 
I also had to plan the relational database, and doing it carefully is what allowed me to make the application functional and polished. ## What's next for EZRemote For this part, I could write an entire book. Future work for EZRemote will include voice interaction with the system, more complex search queries to retrieve information and statistics about different projects, and employee prediction, which will use previous records and performance to tell us whether an employee will be able to finish a task on time.
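The bot's intent detection relies on POS tagging to pull nouns and verbs out of a chat message before matching them against rules. EZRemote did this in JavaScript on the Express.js backend; the sketch below shows the same idea in Python with NLTK purely for illustration, so the library and example outputs are assumptions rather than the project's actual code.

```python
import nltk

# One-time model downloads (newer NLTK releases may need the "punkt_tab"/"_eng" variants).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def extract_keywords(message: str):
    """Pull the nouns and verbs out of a chat message, as the rules engine expects."""
    tagged = nltk.pos_tag(nltk.word_tokenize(message))
    nouns = [word for word, tag in tagged if tag.startswith("NN")]
    verbs = [word for word, tag in tagged if tag.startswith("VB")]
    return nouns, verbs

nouns, verbs = extract_keywords("please create a new task for the marketing organization")
print(nouns, verbs)   # e.g. ['task', 'marketing', 'organization'] ['create']
```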
## Inspiration 62% of adults around the world have trouble sleeping according to the Philips Global Sleep Survey. That’s 4.9 billion people. In Canada, 1 in 2 have sleeping troubles, 1 in 5 don’t find their sleep refreshing, and 1 in 3 have trouble staying awake according to the Public Health Agency of Canada. With so much data being gathered during our waking hours, you would think we would do the same to improve our sleep, a crucial activity we do for, on average, 8 hours a day. That’s 56 hours a week, 10 days a month, 120 days a year. Yet, only 8% do according to Statista. So let’s change that! ## What it does Welcome to The Future of Sleep. 💤 Introducing PillowMate, the world’s first smart pillow with built-in 360 surround sound to listen to your favorite music and podcasts, a white noise generator, a smart alarm that gently wakes you up based on your sleep cycle, and monitoring of your sleep, body temperature, heart rate, breathing, snoring, and so much more. Now in terms of our market size, everybody sleeps. We have 7.9 billion potential customers around the world. Suppose we only service the US & Canadian market, that’s still 370 million customers. And suppose we charge $50 per pillow, that’s a serviceable market size of $18.5 billion, while our total addressable market stands at $395 billion. Suppose we only capture 10% of the North American market, that’s a serviceable obtainable market of $185 million. ## How we built it * We used a temperature sensor to detect body temperature, a reed sensor along with a magnet embedded into the pillow to detect tossing and turning, and a microphone to detect snoring. * These were connected to a Raspberry Pi * A music player served as an actuator along with a display for the data collected. ## Challenges we ran into * The libraries for the FPGA board we were originally using contained some errors and lacked documentation. We adapted by switching to a Raspberry Pi. * The Arduino speakers played distorted sounds, even after two hours of debugging. We adapted by playing audio from a speaker connected to the Raspberry Pi. ## Accomplishments that we're proud of * We are most proud of having built a hardware design from end-to-end despite having faced challenges in debugging nearly all the equipment we used. ## What we learned * Are the wires connected? Is the file format on the SD card correct? Does this sensor work? Is there a voltage difference between these points? When working with hardware, debugging is much more difficult as you not only have to ensure your code is working, but also that it meets the specifications required by your hardware and that the hardware itself is attached properly. * Moreover, with hardware it takes much longer to run a program for debugging, both due to the upload time and the requirement of physical input or output. * Therefore, it is even more important to write careful code as debugging tends to consume even more time. ## What's next for PillowMate * In the future, we would like to introduce an accelerometer or fuse several reed sensors for more precise detection of turning. * We would also like to learn how different factors such as temperature should affect white noise being played, so that the sensors have a greater influence on the music we play. * Add 3D 360 surround sound * Build a phone app that syncs automatically with the pillow * Get it in the homes of all UofT students so the student body can collectively sleep better!
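To make the tossing-and-turning detection above concrete, here is a small hedged sketch of how the reed switch might be read on the Raspberry Pi with the `gpiozero` library. The GPIO pin number and the idea of simply counting closures are assumptions for illustration, not the team's actual wiring or code.

```python
from datetime import datetime
from signal import pause
from gpiozero import Button

# The reed switch behaves like a button: the circuit closes whenever the pillow's
# embedded magnet passes the sensor. GPIO pin 17 is an assumption; match your wiring.
reed = Button(17)
toss_count = 0

def on_toss():
    global toss_count
    toss_count += 1
    print(f"{datetime.now().isoformat(timespec='seconds')}  toss/turn #{toss_count}")

reed.when_pressed = on_toss
pause()  # keep the script alive so the callback can fire all night
```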
## Inspiration Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses such as trend recognition on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, CapitalOne, which has volumes of financial transaction data that are very difficult to parse manually, or even programmatically. We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data. ## What it does On our website, a user can upload their data, generally in the form of a .csv file, which will then be sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses. ## How we built it The front end was very simple. We created the platform using Next.js and React.js and hosted it on Vercel. The back-end was created using Python, in which we employed technologies such as Docker and MLBot to perform data analyses as well as return charts, which were then processed on the front-end using ApexCharts.js. ## Challenges we ran into * It was one of our first times working in real time with multiple people on the same project. This advanced our understanding of how Git's features worked. * There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end. * Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end. ## Accomplishments that we're proud of * We were able to create a full-fledged, functional product within the allotted time we were given. * We utilized our knowledge of how APIs worked to incorporate multiple of them into our project. * We worked positively as a team even though we had not met each other before. ## What we learned * Learning how to incorporate multiple APIs into one product with Next. * Learned a new tech stack * Learned how to work simultaneously on the same product with multiple people. ## What's next for DataDaddy ### Short Term * Add broader applicability to different types of datasets and statistical analyses. * Add more compatibility with SQL/NoSQL commands from Natural Language. * Attend more hackathons :) ### Long Term * Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results. * Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
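For the upload step described above, a minimal Flask endpoint that accepts a .csv file and returns quick summary statistics might look like the sketch below. The route name, field name, and response shape are assumptions; the real backend hands the data off to Docker/MLBot rather than just pandas.

```python
import io

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    """Accept a CSV upload and return quick summary statistics as JSON."""
    file = request.files["file"]                     # form field name "file" is an assumption
    df = pd.read_csv(io.BytesIO(file.read()))
    return jsonify({
        "rows": len(df),
        "columns": list(df.columns),
        "numeric_summary": df.describe().to_dict(),  # per-column count/mean/std/quartiles
    })

if __name__ == "__main__":
    app.run(port=5000)
```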
losing
## Inspiration Dynamic Calendar ## What it does Given a certain time interval (a certain day or week), our app will find all the best times for you to schedule a meeting. ## How I built it The project uses Bootstrap and vanilla HTML for the front-end and Python (Flask) for the back-end. In addition, we are utilizing the Google Calendar API and the Google Speech-to-Text API. ## Challenges I ran into We wanted the app to be voice-controlled through something like Google Assistant, but many of the programs for doing so, like Dialogflow and Twilio, caused unknown issues that even our mentors could not solve. ## Accomplishments that I'm proud of Despite all odds, our team was still able to buckle down and achieve an MVP ## What I learned I learned many of the ways you can deploy apps to smart-home devices like the Google Home. ## What's next for timely I would like to see this project properly using one of the previously mentioned services, Dialogflow.
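The core of the scheduling logic above, finding the best open times in a given day or week, boils down to scanning the gaps between busy intervals pulled from Google Calendar. Here is a small, self-contained sketch of that gap-finding step; the sample times and 30-minute minimum are assumptions for illustration.

```python
from datetime import datetime, timedelta

def find_free_slots(busy, window_start, window_end, min_length=timedelta(minutes=30)):
    """Given (start, end) busy intervals, return the gaps long enough for a meeting."""
    free, cursor = [], window_start
    for start, end in sorted(busy):
        if start - cursor >= min_length:
            free.append((cursor, start))
        cursor = max(cursor, end)           # handles overlapping events
    if window_end - cursor >= min_length:
        free.append((cursor, window_end))
    return free

day = datetime(2020, 2, 1)
busy = [
    (day.replace(hour=9), day.replace(hour=10)),
    (day.replace(hour=13), day.replace(hour=14, minute=30)),
]
for start, end in find_free_slots(busy, day.replace(hour=8), day.replace(hour=17)):
    print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))
```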
## Inspiration All four of us are university students and have had to study remotely due to the pandemic. Like many others, we have had to adapt to working from home and were inspired to create something to improve WFH life, and more generally life during the pandemic. The pandemic is something that has affected and continues to affect every single one of us, and we believe that it is particularly important to take breaks and look after ourselves. It is possible that many of us will continue working remotely even after the pandemic, and in any case, life just won’t be the same as before. We need to be doing more to look after both our mental and physical health by taking regular breaks, going for walks, stretching, meditating, etc. With everything going on right now, sometimes we even need to be reminded of the simplest things, like taking a drink of water. Enough of the serious talk! Sometimes it’s also important to have a little fun, and not take things too seriously. So we designed our webpage to be super cute, because who doesn’t like cute dinosaurs and bears? And also because, why not? It’s something a little warm n fuzzy that makes us feel good inside, and that’s a good enough reason in and of itself. ## What it does Eventy is a website where users are able to populate empty time slots in their Google Calendar with suitable breaks like taking a drink of water, going on a walk, and doing some meditation. ## How we built it We first divided up the work into (i) backend: research into the Google Calendar API and (ii) frontend: looking into website vs chrome extension and learning HTML. Then, we started working with the Google Calendar API to extract data surrounding the events in the user’s calendar and used this information to identify where breaks could be placed in their schedule. After that, based on the length of the time intervals between consecutive events, we scheduled breaks like drinking water, stretching, or reading. Finally, we coded the homepage of our site and connected the backend to the frontend! ## Challenges we ran into * Deciding on a project that was realistic given our respective levels of experience, given the time constraints and the fact that we did not know each other prior to the Hackathon * Configuring the authorization of a Google account and allowing the app to access Google Calendar data * How to write requests to the API to read/write events + How would we do this in a way that ensures we’re only populating empty spots in their calendar and not overlapping with existing events? * Deciding on a format to host our app in (website vs chrome extension) * Figuring out how to connect the frontend of the app to the backend logic ## What we learned We learned several new technical skills like how to collaborate on a team using Git, how to make calls to an API, and also the basics of HTML and CSS.
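Once a suitable gap has been identified, writing the break back into the user's calendar is a single `events().insert` call with the google-api-python-client. The sketch below assumes `creds` is an already-authorized OAuth credentials object and uses a placeholder title and time zone; it is a hedged illustration of the write step, not Eventy's exact code.

```python
from googleapiclient.discovery import build

def schedule_break(creds, start_iso, end_iso, title="Drink some water"):
    """Insert a break event into the user's primary Google Calendar."""
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": title,
        "start": {"dateTime": start_iso, "timeZone": "America/Toronto"},  # assumed zone
        "end": {"dateTime": end_iso, "timeZone": "America/Toronto"},
    }
    created = service.events().insert(calendarId="primary", body=event).execute()
    return created.get("htmlLink")   # link the UI could show back to the user

# Example (assuming `creds` came from the usual OAuth consent flow):
# schedule_break(creds, "2021-01-16T15:00:00", "2021-01-16T15:15:00", "Stretch break")
```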
## Inspiration Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves. ## What it does The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done. ## How We built it We used Flask for the backend and used OpenCV, TensorFlow, and Google Cloud speech to text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations. ## Challenges we ran into We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time. ## Accomplishments that we're proud of We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches. ## What we learned We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start. ## What's next for RealTalk We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server. ![Image](https://i.imgur.com/aehDk3L.gif) Tracks movement of hands and face to provide real-time analysis on expressions and body-language. ![Image](https://i.imgur.com/tZAM0sI.gif)
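One way to picture the multi-threading hurdle mentioned above is a worker thread that analyzes frames while the main thread (for example, a Flask route) drains the results from a queue. The sketch below uses mean frame brightness as a stand-in metric, whereas the real project computes gesture and expression features, so treat it as a structural illustration only.

```python
import queue
import threading

import cv2

metrics = queue.Queue()

def video_worker(stop_event):
    """Capture frames and push one simple per-frame metric onto the shared queue."""
    cap = cv2.VideoCapture(0)
    while not stop_event.is_set():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        metrics.put(("brightness", float(gray.mean())))   # stand-in for real features
    cap.release()

stop = threading.Event()
threading.Thread(target=video_worker, args=(stop,), daemon=True).start()

try:
    for _ in range(100):                 # main thread: consume results as they arrive
        try:
            name, value = metrics.get(timeout=5)
        except queue.Empty:
            break
        print(name, round(value, 1))
finally:
    stop.set()
```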
losing
## Inspiration We noticed that some people are either unable to navigate or move around their spaces easily. We wanted to empower these people by giving them the tools to embody a robot using brain control. ## What it does It allows users to control a robot like Spot using their brain signals. ## How we built it **EEG part** * wrote a visual entrainment script in PsychoPy (using Python) * wrote a real-time EEG processing script that takes in continuous voltage data from 8 channels, cleans/filters it with a Butterworth filter and independent component analysis, computes a fast Fourier transform, and runs an algorithm we wrote to classify which frequency had the largest increase in power within the past five seconds. ## Challenges we ran into * the visual entrainment script is supposed to have four images flashing at different frequencies, but neither PsychoPy nor React can achieve this; we should've tried MATLAB. * real-time EEG processing is difficult to debug as it requires constant streaming of data and using multiple functions at the same time. ## Accomplishments that we're proud of * debugging nonstop for several hours + finally understanding most of LSL ## What we learned * how to conduct a full literature review for EEG projects in a short amount of time + a more in-depth understanding of real-time processing ## What's next for Insight OS
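The real-time pipeline above (Butterworth filtering, FFT, then picking the entrainment frequency whose power is largest) can be sketched compactly with NumPy and SciPy. The sampling rate, frequency targets, and synthetic test signal below are assumptions for illustration; the real script also applies ICA and streams data over LSL.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (assumption; match the amplifier)

def bandpass(data, low=1.0, high=40.0, order=4):
    """Butterworth band-pass, applied forward and backward to avoid phase distortion."""
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, data)

def band_power(window, target_hz, width=0.5):
    """Spectral power near one stimulation frequency for a single-channel window."""
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs > target_hz - width) & (freqs < target_hz + width)
    return spectrum[mask].mean()

# Fake 5-second window containing a 12 Hz component, to show the classification idea.
t = np.arange(0, 5, 1 / FS)
window = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(len(t))
powers = {hz: band_power(bandpass(window), hz) for hz in (8, 10, 12, 15)}
print("selected target:", max(powers, key=powers.get), "Hz")
```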
## Inspiration We are heading toward a wireless future where human and machine are united. ## What it does Our program can analyze and record brainwaves and control the Tello's takeoff and landing. ## How we built it First, we found a starter kit for brainwave detection called MindWave Mobile 2. A program on the PC provides a GUI, reads the serial information, and performs the data analysis and classification. A Tello drone is connected to the same PC through the network, and the program can send commands over Wi-Fi to control the drone directly. ## Challenges we ran into 1. Setting up the connection between the PC and the MindWave device: there are no very detailed documents about how the communication process is performed. We spent a lot of time analyzing the serial communication and its output format. 2. Setting up command control for the Tello: we also spent some time on this step because the packages online are mostly based on Python 2 while our program runs on Python 3, so we had to debug and change some code to make it work. 3. GUI design and concurrent programming: we spent a significant amount of time debugging the logic of the GUI, as well as the program flow to read, write, and share data between threads. 4. Detection ## Accomplishments that we're proud of Being able to write software that communicates with the hardware, and building something cool ## What we learned Team collaboration, working under stress, having fun!!! ## What's next for Mind Tello We are going to improve the control accuracy and the detection.
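The drone side of the system speaks the Tello SDK's plain-text protocol over UDP. A hedged, stand-alone sketch of the takeoff/landing commands is shown below; in the real project these `send()` calls would be triggered by the brainwave classifier rather than a timer, and the local reply port is an arbitrary choice.

```python
import socket
import time

TELLO_ADDR = ("192.168.10.1", 8889)   # the drone's address once you join its Wi-Fi

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9000))                 # local port for the drone's replies (arbitrary)
sock.settimeout(5)

def send(cmd: str):
    sock.sendto(cmd.encode(), TELLO_ADDR)
    try:
        reply, _ = sock.recvfrom(1024)
        print(cmd, "->", reply.decode(errors="ignore"))
    except socket.timeout:
        print(cmd, "-> no reply")

send("command")    # enter SDK mode
send("takeoff")    # in Mind Tello, fired when the brainwave signal crosses its threshold
time.sleep(5)
send("land")
sock.close()
```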
## Inspiration The inspiration for this project was a group-wide understanding that trying to scroll through a feed while your hands are dirty or in use is near impossible. We wanted to create a computer program to allow us to scroll through windows without coming into contact with the computer, for eating, chores, or any other time when you do not want to touch your computer. This idea evolved into moving the cursor around the screen and interacting with a computer window hands-free, making boring tasks, such as chores, more interesting and fun. ## What it does HandsFree allows users to control their computer without touching it. By tilting their head, moving their nose, or opening their mouth, the user can control scrolling, clicking, and cursor movement. This allows users to use their device while doing other things with their hands, such as doing chores around the house. Because HandsFree gives users complete **touchless** control, they’re able to scroll through social media, like posts, and do other tasks on their device, even when their hands are full. ## How we built it We used a dlib face feature tracking model to compare some parts of the face with others when the face moves around. To determine whether the user was staring at the screen, we compared the distance from the edge of the left eye to the left edge of the face with the distance from the edge of the right eye to the right edge of the face. We noticed that one of the distances was noticeably bigger than the other when the user tilted their head. Once the distance on one side was larger by a certain amount, the scroll feature was disabled, and the user would get a message saying "not looking at camera." To determine which way and when to scroll the page, we compared the left edge of the face with the face's right edge. When the right edge was significantly higher than the left edge, the page would scroll up. When the left edge was significantly higher than the right edge, the page would scroll down. If both edges had around the same Y coordinate, the page wouldn't scroll at all. To determine the cursor movement, we tracked the tip of the nose. We created an adjustable bounding box in the center of the user's face (based on the average values of the edges of the face). Whenever the nose left the box, the cursor would move at a constant speed in the direction of the nose's position relative to the center. To determine a click, we compared the top lip Y coordinate to the bottom lip Y coordinate. Whenever they moved apart by a certain distance, a click was activated. To reset the program, the user can look away from the camera, so the program can't track a face anymore. This will reset the cursor to the middle of the screen. For the GUI, we used the Tkinter module, an interface to the Tk GUI toolkit in Python, to generate the application's front-end interface. The tutorial site was built using simple HTML & CSS. ## Challenges we ran into We ran into several problems while working on this project. For example, we had trouble developing a system for judging whether a face had changed enough to move the cursor or scroll through the screen, calibrating the system and movements for different faces, and users not being able to tell whether their faces were balanced. It took a lot of time looking into various mathematical relationships between the different points of someone's face. Next, to handle the calibration, we ran large numbers of tests, using different faces, distances from the screen, and angles of the face to the screen. 
To counter the last challenge, we added a box feature to the window displaying the user's face to visualize how far they need to move to move the cursor. We used the calibration tests to come up with default values for this box, but we made customizable constants so users can set their boxes according to their preferences. Users can also customize the scroll speed and mouse movement speed to their own liking. ## Accomplishments that we're proud of We are proud that we could create a finished product and expand on our idea *more* than we had originally planned. Additionally, this project worked much better than expected, and using it felt like a superpower. ## What we learned We learned how to use facial recognition libraries in Python, how they work, and how they’re implemented. For some of us, this was our first experience with OpenCV, so it was interesting to create something new on the spot. Additionally, we learned how to use many new Python libraries, and some of us learned about Python class structures. ## What's next for HandsFree The next step is getting this software on mobile. Of course, most users use social media on their phones, so porting this over to Android and iOS is the natural next step. This would reach a much wider audience and allow users to use this service across many different devices. Additionally, implementing this technology as a Chrome extension would make HandsFree more widely accessible.
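To ground the landmark comparisons described above, here is a hedged sketch of the head-tilt check using dlib's 68-point predictor: it compares the vertical positions of the two jawline endpoints, roughly as the write-up describes. The 20-pixel threshold and printouts are illustrative assumptions, and the `.dat` model file must be downloaded from dlib separately.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# The 68-landmark model file is distributed separately by dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        edge_a = pts.part(0).y     # one jawline endpoint
        edge_b = pts.part(16).y    # the opposite jawline endpoint
        if edge_b < edge_a - 20:   # threshold in pixels (assumption)
            print("head tilted one way -> scroll up")
        elif edge_a < edge_b - 20:
            print("head tilted the other way -> scroll down")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```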
losing
## Why this project? Seva, our main character, has trouble sleeping when the room is too bright, but difficulty waking up if the room is too dark. This is an issue, as lights in cities are very bright, so there is always a trade-off between falling asleep and waking up. If only there was a way to have a light activate a short time before the alarm! ## What it does Surprise, surprise, it activates a light before the actual alarm triggers. We also envision other, more physical methods of alarm for mornings when waking up is particularly difficult. These include a boxing glove hitting your face, and a water jet sprayed in your direction. We also have an app UI to set the alarm from a website. Now Seva will have no trouble with waking up! ## How we built it The hardware side is built using an Arduino Due, and uses a shield board, a bread board, an RTC chip for accurate time keeping, a buzzer, and a relay to control the USB powered, bright LED light. (USB light is connected by a USB extension that was hacked apart.) ## Challenges we ran into On the Arduino side: * Code sometimes refuses to compile, throwing errors on every function (issue is not reproducible) * RTC chip had the battery inserted the wrong way * RTC chip only works when plugged into I^2^C0 and not I^2^C1 On the Taipy side: ## Accomplishments that we're proud of * Basic functionality achieved * Did not die * Created an alarm that is accessible to pretty much all members of the population (design for accessibility!) ## What we learned Besides learning how to operate with Taipy, use an Arduino Due with a breadboard, setting up an RTC, buzzer, relay, and the LED light, as well as finally understanding what female wires are needed for, most importantly we learned the power of friendship! Our character arcs defined the plot of the hackathon. ## What's next for Torture Alarm The next step would be to include the boxing glove and water jet in this project. Although the code for controlling the punching glove was written, improving it by including the punching glove itself would be great (not to encourage violence but to promote waking up, of course).
## Inspiration We like walking at night, but we don't like getting mugged. 💅🤷‍♀️We can't program personal bodyguards (yet) so preventative measures are our best bet! Accessibility and intuitiveness are 🔑in our idea. ## What it does HelpIs allows users to contact services when they feel unsafe. 📱Location trackers integrate tech into emergency services, which tend to lag behind corporations in terms of tech advancements. 🚀🚔🚓 ## How we built it We heavily relied on the use of APIs from Twilio and MapBox to create our project. 💪Specifically, we employed Twilio Programmable Voice for calling 🗣️🗣️, Mapbox GL JS for map display and geolocation 🗺️🗺️, and Mapbox Directions API for route finding and travel time prediction 📍📍. The web app itself is built with [VanillaJS](http://vanilla-js.com/) (:p), HTML, and CSS. 🙆 ## Challenges we ran into Our original design centered on mobile connectivity through a phone app and a wider range of extendable features. Unfortunately, one of our teammates couldn't make it as scheduled 💔, and we had to make some trade-offs 💀💀. Eventually, we pulled through and are thrilled to exceed many goals we set for ourselves at the start of the hack! 💥🔥💯 🚩🚩🚩Also, Alison thought the name was HelpIs (as in like Help is coming!) but Alex (who came up with the name) assumed she could tell it was Helpls (like Help + pls = Helpls). 🐒He even wrote about it in the footnotes of his mockup. You can go read it. Embarrassing... 🚩🚩🚩 ## Accomplishments that we're proud of We are so proud of the concept behind HelpIs that we decided to make our **first ever** pitch! ⚾ Thanks Alex for redoing it like four times! 💔😅 ## What we learned Differentiating our program from existing services is important in product ideation! 🤑 Also, putting emojis into our about page is super funny 🦗🦟🦗🦟🦗🦟🦗🦟 ## What's next for HelpIs We want to make it mobile! There are a few more features that we had planned for the mobile version. Specifically, we were looking to use speech-to-text and custom "code words" to let a user contact authorities without alerting those around them, inspired by the famous [911 call disguised as pizza order](https://www.youtube.com/watch?v=ZJL_8kNFmTI)! 🤩🤩🤩
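For flavor, here is what the Twilio Programmable Voice piece could look like server-side in Python. This is a hedged sketch only, since HelpIs itself is a VanillaJS web app, and the credentials, phone numbers, and TwiML URL below are placeholders.

```python
from twilio.rest import Client

# Placeholder credentials and phone numbers; substitute real account values.
client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

call = client.calls.create(
    to="+15555550100",                             # the user's chosen emergency contact
    from_="+15555550199",                          # a Twilio number owned by the app
    url="http://demo.twilio.com/docs/voice.xml",   # TwiML saying what the call announces
)
print("call started:", call.sid)
```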
## Inspiration We were inspired to build Schmart after researching pain points within grocery shopping. We realized how difficult it is to stick to your health goals or have a reduced environmental impact while grocery shopping. Inspired by innovative technology that already exists, we wanted to create an app that would conveniently allow anyone to feel empowered while shopping by reaching their goals and reducing friction. ## What it does Our solution: gamify the grocery shopping experience by allowing the user to set goals before shopping, scan products in real time using AR and AI to find products that meet their goals, and earn badges and rewards (PC Optimum points) by doing so. ## How we built it This product was designed on Figma, and we built the backend using Flask and Python, with the database stored using SQLite3. We then built the front end with React Native. ## Challenges we ran into Some team members had school deadlines during the hackathon, so we could not be fully focused on the hackathon coding. In addition, our team was not too familiar with React Native, so development of the front end took longer than expected. ## Accomplishments that we're proud of We are extremely proud that we were able to build and deploy an end-to-end product in such a short timeframe. We are happy to empower people while shopping, make the experience so much more enjoyable, and solve problems that exist while shopping. ## What we learned Communication is key. This project would not have been possible without the relentless work of all our team members striving to make the world a better place with our product. Whether it be using technology we have never used before or sharing our knowledge with the rest of the group, we all wanted to create a product that would have a positive impact, and because of this we were successful in creating our product. ## What's next for Schmart We hope everyone can use Schmart on their phones as a mobile app in the future. We can see it being used in grocery stores (and hopefully all stores) eventually. Meeting health and environmental goals should be barrier-free, and as an app that anyone can use, Schmart makes this possible.
partial
## Inspiration Energy is the foundation for everyday living. Productivity, from the workplace to lifestyle—sleep, nutrition, fitness, social interactions—is dependent on sufficient energy levels for each activity [1]. Various generalized interventions have been proposed to address energy levels, but currently no method has proposed a personal approach using daily schedules/habits as determinants for energy. ## What it does Boost AI is an iOS application that uses machine learning to predict energy levels based on daily habits. Simple and user-specific questions on sleep schedule, diet, fitness, social interaction, and current energy level will be used as determinants to predict future energy levels. Notifications will give the user personalized recommendations to increase energy throughout the day. Boost AI allows you to visualize your energy trends over time, including predictions for personalized intervention based on your own lifestyle. ## How we built it We used MATLAB and TensorFlow for our machine learning framework. The current backend utilizes a support vector machine that is trained on simulated data, based on a subject's "typical" week, with relevant data augmentation. The linear support vector machine is continually trained with each new user input, and each prediction is based on a moving window, as well as historical daily trends. We have further trained an artificial neural network to make these same predictions, using TensorFlow with a Keras wrapper. In the future this neural network model will be used to allow an individual to get accurate predictions from their first use by applying a network trained on a large and diverse set of individuals, then continually fine-tuning their personal network to give the best predictions and accurate trends for them. We used Sketch to visualize our iOS application prototype. ## Challenges we ran into Although we come from the healthcare field, we had limited domain knowledge of human energy and productivity. We did research on each parameter that determines energy levels. ## Accomplishments that we're proud of Boost AI is strongly translatable to improving energy in everyday life. We’re proud of the difference it can make to the everyday lives of our users. ## What's next for Boost AI We aim to improve our prototype by training our framework with a real-world dataset. We would like to explore two main applications: **1) Workspace.** Boost AI can be optimized in the workplace by integrating the application into workplace-specific software. We predict that Boost AI will "boost" energy with specific individual interventions for improved productivity and output. **2) Healthcare.** Boost AI can use health-based data such as biometric markers and research-backed questionnaires to predict energy. The data and trends can be used for clinically driven interventions and improvements, as well as personal use. ## References: [1] Arnetz, B. B., Broadbridge, C. L., & Ghosh, S. (2014). Longitudinal determinants of energy levels in knowledge workers. Journal of Occupational and Environmental Medicine.
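As a toy version of the continually trained linear SVM described above, scikit-learn's `SGDClassifier` with hinge loss supports exactly this kind of incremental updating via `partial_fit`. The features, simulated labels, and energy buckets below are invented for illustration and are not the project's real training data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Features per day: hours slept, meals eaten, minutes of exercise, social interactions.
# Label: energy bucket 0 = low, 1 = medium, 2 = high. All values here are simulated.
rng = np.random.default_rng(0)
X = rng.uniform([4, 1, 0, 0], [9, 4, 90, 6], size=(200, 4))
y = (X[:, 0] > 7).astype(int) + (X[:, 2] > 45).astype(int)   # crude simulated labels

# Hinge loss makes this a linear SVM; partial_fit lets it keep learning from new entries.
model = SGDClassifier(loss="hinge", random_state=0)
model.partial_fit(X, y, classes=[0, 1, 2])

today = np.array([[6.5, 3, 20, 2]])            # today's questionnaire answers
print("predicted energy bucket:", model.predict(today)[0])
model.partial_fit(today, [1])                  # fold today's reported energy back in
```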
## Inspiration The precise and confusing way in which network admin data must currently be found. The whole point of the project is to decrease the unnecessary level of burden for a given person to simply access and make sense of data. ## What it does We made a Spark/Slack/Cortana/Facebook Messenger/Google Assistant chat/voice bot that allows people to get data insights from Meraki networking gear by just typing/talking in a natural manner via Natural Language Processing. We also use Kafka/Confluent to upload chat messages to AWS S3 and analyze them and improve our NLP system. Additionally, we use advanced yet intuitive 2D and 3D modeling software to make it easy for users to understand the data they receive. ## How we built it Node.js chat bot on Heroku + custom analytic servers on Heroku along with a local Java data processing server and online S3 bucket. ## Challenges we ran into Getting Kafka set up and working as well as understanding different networking features. ## Accomplishments that we're proud of Overcoming our challenges. ## What we learned We learned a lot about networking terminology as well as how data processing/streaming works. ## What's next for EZNet More features to be available via the cross-platform bot!
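The chat-message logging path (bot to Kafka, then on to S3 for later NLP analysis) can be sketched with the `confluent_kafka` producer as below. The broker address, topic name, and message schema are assumptions; the actual bot is written in Node.js, so this Python snippet is only an illustration of the pattern.

```python
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})   # broker address is assumed

def log_chat_message(user: str, text: str):
    """Publish each chat message so a downstream consumer can archive it to S3."""
    payload = json.dumps({"user": user, "text": text}).encode("utf-8")
    producer.produce("chat-messages", value=payload)           # topic name is assumed

log_chat_message("alice", "How many clients are on the guest SSID right now?")
producer.flush()   # make sure buffered messages are actually delivered before exiting
```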
## Inspiration 92% of Americans don’t undergo routine health screenings. In fact, missed preventative opportunities cost US healthcare 55 billion dollars every year. We wanted to create a proactive solution that uses the data we already generate through devices like iPhones and Apple Watches. Our goal was to provide people with daily insights into their health, helping them make informed decisions about whether to visit a doctor or change their habits before more serious issues arise. ## What it does iHealth.ai automatically collects daily health data, such as heart rate, sleep analysis, walking distance, and headphone audio exposure, from Apple devices. It then processes this data and feeds it into a powerful language model (LLM). The LLM analyzes patterns and provides actionable insights, recommending whether you should seek medical advice or take steps to improve your health. ## How we built it We used **Propel Auth0** to authenticate users securely and **Vercel** to host our web application. **Apple HealthKit** APIs were used to gather health data from users' iPhones and Apple Watches, such as heart rate, sleep duration, walking distance, and audio exposure. This data is stored securely in **MongoDB**. We then used **OpenAI**'s GPT-4o API to analyze the data, providing users with feedback based on their health trends. ## Challenges we ran into One of the biggest challenges we encountered was implementing **Propel Auth0** for user authentication. Integrating it across platforms was far more complex than expected, particularly when trying to synchronize user sessions and securely manage health data from Apple devices. Initially, we planned to use the **Cerebras AI model** for health analysis, but soon realized it was too computationally demanding to run on our laptops, requiring us to pivot to **OpenAI's GPT model**. Additionally, we faced challenges with parsing and cleaning health data from Apple’s XML files. The raw format required custom parsers to extract and organize data, such as sleep, heart rate, and walking metrics, so that it could be fed into the AI model effectively. While time-consuming and technically demanding, this process was crucial to ensuring accurate and reliable health assessments. ## Accomplishments that we're proud of We’re proud of the seamless integration of multiple technologies: Apple HealthKit, MongoDB, Flask, Propel Auth0, and OpenAI’s API. Our team worked hard to ensure that the daily health updates are user-friendly, and the insights are actionable. We’ve made healthcare more accessible by giving people insights without having to schedule constant check-ups. ## What we learned We learned a lot about the complexity of health data and how to present it in a way that is both accessible and meaningful to users. We also deepened our understanding of integrating machine learning models with real-world data and handling large datasets in a secure, scalable way. ## What's next for iHealth.ai We plan to team up with actual doctors to fine-tune our model and provide better, more medically tailored recommendations. By collaborating with medical professionals, we hope to make iHealth.ai more accurate and helpful, using a specialized GPT model to assess patient health with greater precision. Our ultimate goal is to create a personalized health advisor that everyone can access, no matter where they are or their financial situation.
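The XML-parsing challenge mentioned above usually means streaming through Apple Health's large `export.xml` and keeping only the record types you care about. Below is a hedged sketch that averages heart-rate records per day; the attribute names follow the standard export format, but the file path and aggregation choice are assumptions rather than the team's parser.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

HEART_RATE = "HKQuantityTypeIdentifierHeartRate"   # record type used in Apple's export

def daily_average_heart_rate(path="export.xml"):
    """Stream through the (often huge) Health export and average heart rate per day."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "Record" and elem.get("type") == HEART_RATE:
            day = elem.get("startDate", "")[:10]        # "YYYY-MM-DD"
            totals[day] += float(elem.get("value", 0))
            counts[day] += 1
        elem.clear()                                    # free memory as we go
    return {day: totals[day] / counts[day] for day in totals}

for day, bpm in sorted(daily_average_heart_rate().items()):
    print(day, round(bpm, 1), "bpm")
```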
partial
# 🎓 **Inspiration** Entering our **junior year**, we realized we were unprepared for **college applications**. Over the last couple of weeks, we scrambled to find professors to work with to possibly land a research internship. There was one big problem though: **we had no idea which professors we wanted to contact**. This naturally led us to our newest product, **"ScholarFlow"**. With our website, we assure you that finding professors and research papers that interest you will feel **effortless**, like **flowing down a stream**. 🌊 # 💡 **What it Does** Similar to the popular dating app **Tinder**, we provide you with **hundreds of research articles** and papers, and you choose whether to approve or discard them by **swiping right or left**. Our **recommendation system** will then provide you with what we think might interest you. Additionally, you can talk to our chatbot, **"Scholar Chat"** 🤖. This chatbot allows you to ask specific questions like, "What are some **Machine Learning** papers?". Both the recommendation system and chatbot will provide you with **links, names, colleges, and descriptions**, giving you all the information you need to find your next internship and accelerate your career 🚀. # 🛠️ **How We Built It** While half of our team worked on **REST API endpoints** and **front-end development**, the rest worked on **scraping Google Scholar** for data on published papers. The website was built using **HTML/CSS/JS** with the **Bulma** CSS framework. We used **Flask** to create API endpoints for JSON-based communication between the server and the front end. To process the data, we used **sentence-transformers from HuggingFace** to vectorize everything. Afterward, we performed **calculations on the vectors** to find the optimal vector for the highest accuracy in recommendations. **MongoDB Vector Search** was key to retrieving documents at lightning speed, which helped provide context to the **Cerebras Llama3 LLM** 🧠. The query is summarized, keywords are extracted, and top-k similar documents are retrieved from the vector database. We then combined context with some **prompt engineering** to create a seamless and **human-like interaction** with the LLM. # 🚧 **Challenges We Ran Into** The biggest challenge we faced was gathering data from **Google Scholar** due to their servers blocking requests from automated bots 🤖⛔. It took several hours of debugging and thinking to obtain a large enough dataset. Another challenge was collaboration – **LiveShare from Visual Studio Code** would frequently disconnect, making teamwork difficult. Many tasks were dependent on one another, so we often had to wait for one person to finish before another could begin. However, we overcame these obstacles and created something we're **truly proud of**! 💪 # 🏆 **Accomplishments That We're Proud Of** We’re most proud of the **chatbot**, both in its front and backend implementations. What amazed us the most was how **accurately** the **Llama3** model understood the context and delivered relevant answers. We could even ask follow-up questions and receive **blazing-fast responses**, thanks to **Cerebras** 🏅. # 📚 **What We Learned** The most important lesson was learning how to **work together as a team**. Despite the challenges, we **pushed each other to the limit** to reach our goal and finish the project. On the technical side, we learned how to use **Bulma** and **Vector Search** from MongoDB. But the most valuable lesson was using **Cerebras** – the speed and accuracy were simply incredible! 
**Cerebras is the future of LLMs**, and we can't wait to use it in future projects. 🚀 # 🔮 **What's Next for ScholarFlow** Currently, our data is **limited**. In the future, we’re excited to **expand our dataset by collaborating with Google Scholar** to gain even more information for our platform. Additionally, we have plans to develop an **iOS app** 📱 so people can discover new professors on the go!
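Putting the retrieval pieces above together, sentence-transformers for the query vector followed by MongoDB Atlas Vector Search for the top-k papers, might look like the sketch below. The connection string, index name, and field names are placeholders, and the exact pipeline the team used may differ.

```python
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

# Connection string, database/collection names, and index/field names are placeholders.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
papers = client["scholarflow"]["papers"]
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

def top_k_papers(query: str, k: int = 5):
    """Vectorize the query and let Atlas Vector Search return the closest papers."""
    vector = encoder.encode(query).tolist()
    pipeline = [
        {
            "$vectorSearch": {
                "index": "paper_vector_index",
                "path": "embedding",
                "queryVector": vector,
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"title": 1, "professor": 1, "college": 1, "_id": 0}},
    ]
    return list(papers.aggregate(pipeline))

for hit in top_k_papers("machine learning for protein structure prediction"):
    print(hit)
```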
## Inspiration With the Berkeley enrollment time just around the corner, everyone is stressed about what classes to take. Recently, we had a conversation with one of our friends who was especially stressed about taking CS 162 next semester, with her main concern being that the course has so much content and it will be hard for her to process and digest all the information before midterms. We got the idea to create SecondSearch, where she and all other students in any class can quickly and efficiently review class material by searching through lectures directly. ## What it does SecondSearch answers any question about a course with a direct link to the lecture that addresses the question. It performs a vector similarity search to determine which portion of a lecture is most likely to answer your question and then displays that video. ## How we built it We built SecondSearch on the Milvus open-source vector database, using OpenAI to help with the search, then completed the product with a companion React frontend built with the Chakra UI component library. We implemented the backend using FastAPI and populated the Milvus Docker containers with a Jupyter Notebook. ## Challenges we ran into We had trouble setting up Milvus and Docker at first, but were quickly able to find thorough documentation for the setup process. Working with React and frontend in general for the first time, we took a couple of hours ramping up. It was smooth sailing after the difficult ramp-up process :) ## Accomplishments that we're proud of We're proud of getting a full-stack product working in the short span of the hackathon: the client, server, and Milvus Docker instance. ## What we learned We learned how to use Docker, FastAPI, and React, as well as the basics (struggles) of full-stack development. ## What's next for SecondSearch After creating the minimum viable product, we wanted to make the UI more friendly by using OpenAI to summarize the caption display from the video segments. However, we quickly realized that adding this change would slow the search time down from its current ~1 second to ~20 seconds. As we ran out of time to speed up this feature, we decided to temporarily remove it. However, we will be reimplementing it more efficiently as soon as possible. As for the big picture and the more distant future, currently our product works with lecture series uploaded to YouTube - we want to expand to lecture videos uploaded to other platforms, as some Berkeley classes upload recordings to bCourses, and other institutions use different platforms. After we expand the project further, some reach goals for the far future include advertising the completed product to all university students, as lectures are often recorded and uploaded in some form. We also want to add new features in future patches such as saving previous searches, and more.
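A hedged sketch of the lookup path, embed the question and then ask Milvus for the nearest lecture segment, is below. The collection name, field names, embedding model, and metric are assumptions for illustration; the production stack wraps this in a FastAPI endpoint.

```python
from openai import OpenAI
from pymilvus import connections, Collection

connections.connect("default", host="localhost", port="19530")   # local Milvus Docker instance
lectures = Collection("lecture_segments")                         # collection/field names are assumptions
lectures.load()
oai = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_lecture(question: str) -> str:
    """Return a link to the lecture segment most likely to answer the question."""
    vec = oai.embeddings.create(model="text-embedding-3-small", input=question).data[0].embedding
    hits = lectures.search(
        data=[vec],
        anns_field="embedding",
        param={"metric_type": "L2", "params": {"nprobe": 10}},
        limit=1,
        output_fields=["video_url", "start_seconds"],
    )
    best = hits[0][0]
    return f'{best.entity.get("video_url")}&t={best.entity.get("start_seconds")}'

print(find_lecture("How does the scheduler pick the next process in CS 162?"))
```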
## Inspiration Being surrounded by many high school and university students, with two of us applying to universities this September, we wanted to create an all-in-one tool to help students navigate the complex world of university applications. We wanted to create a one-stop place where students can improve their university applications and gain access to scholarships to benefit them on their future journey. We recalled that over 20 million dollars' worth of scholarship money goes unclaimed in Canada each year, and given that many students struggle with the weight of tuition, one of our key functionalities is a sponsorship database ensuring that students are able to get all the money they can to support their future. In short, applying to university is not an easy task, quite the opposite, and with so many resources spread out so sparsely, we wanted to create an all-in-one tool to empower students on their journey towards the future. ## What it does Our platform offers personalized advice, step-by-step application guidance, scholarship matching, and tips for acing interviews and essays. Whether you're aiming for top universities or seeking financial aid, Scholar Rank is here to make your journey smoother, helping you achieve your academic dreams with confidence. Firstly, our scholarship database search allows you to find scholarships that are specifically relevant to you, your aspirations, and the characteristics that best allow you to claim them. With over 500 scholarships, you will find a plethora of awards that you can apply to in order to secure your future. With our essay helper, we provide in-depth feedback on your short answers and essays to ensure you communicate yourself in the best way possible. Powered by AI, our platform ensures you create the strongest essay possible. Finally, with our tuition calculator and stats predictor, you can calculate the tuition needed for your desired program to begin your budgeting plans and see your chances of admission to different programs, enabling you to create a comprehensive plan for achieving a successful future. ## How we built it **Backend - Scholarships:** We utilized Python and Flask for the backend development of our various applications. For our sponsorship database we compiled a spreadsheet with 500+ scholarships. The user enters information about themselves on the frontend, and through a POST request that information is sent to the Flask server as JSON. The server captures the JSON payload and retrieves the user's input information. We then use the Cohere API to rank the scholarships based on the user's query and parameters. We then extract the top 12 of these recommendations and return them to the front end to be displayed. This is our favorite feature! **Backend - Essay Helper:** We trained a custom ChatGPT model to be able to respond to user inputs and provide detailed tips on how to improve. This was a very tedious task, requiring us to communicate from a Flask server to the ChatGPT API to the React frontend. **Backend - Stats:** This was by far the most annoying feature to get to work. We had to download a ton of university statistics as Excel files, convert those to CSV files, and process them into our application. **Backend - Tuition:** We again utilized Flask and Python: the user's input about their program and university is sent to the Flask server through a POST request. 
We then compare the parameters of the user's input with our compiled dataset of tuition costs for various programs from various universities and output an approximate tuition cost. **Frontend - Landing Page** I stayed up all night on Friday making this webpage. (Mostly because I wasn't satisfied with the result!) I feel like every website these days is a carbon copy of the same corporate template. It was a lot of fun to experiment with a more brutalist design to create a super eye-catching landing page. We used React, but we did not do any smooth animations as we were going for a raw feeling. **Frontend - Dashboard** We wanted to maintain the same minimalistic feeling as the landing page. I really like how "bare bones" it looks, but it's still a complex React component. ## Challenges we ran into This was our first time using Flask for backend processing, so there was a lot we were unfamiliar with and a lot of experimenting we had to do to get familiar with the framework. Specifically, we found it challenging to send specific user inputs or actions to the server, including checkboxes and drop-down menus. We also faced challenges on the tuition page when trying to compare the existing dataset with the user input, but we were able to solve these problems through innovative design choices. Finally, our largest challenge was definitely hosting our webpage. Since we created a dynamic webpage that makes requests to the Flask server, we could not use a static host such as GitHub Pages, which we were not expecting. We had to scour the web for a site that would allow us to host our site, and even then we faced many challenges in running the scripts to host the server. Eventually, after a lot and a lot of debugging, we were able to successfully host the site on Vercel. ## Accomplishments that we're proud of As this is our first time working with the ChatGPT API and Cohere Rerank, we are very proud of the results. The final product exceeds our expectations, as Cohere Rerank worked way better than we expected. The scholarship ranking was incredibly consistent and relevant to the information provided by the user. Finally, the UI is very clean and polished, and it was very cool to experiment with different website styles. ## What we learned Every member of our team tried something new for the first time. We learned and implemented Flask, the OpenAI API, Cohere Rerank, React, web scraping, and database management. While we all had a basic understanding of these technologies, this project allowed us to explore even further and, most importantly, connect everything into one big project. ## What's next for Pineapple Pathways As technology improves, we would like to use the newest AI models to improve Pineapple Pathways' abilities. We would also like to expand our scholarship dataset using web scraping and databasing tools. Additionally, we'd like to improve the university application section of the website by providing stats about more universities. To do this, we would need to collect more accurate data about various universities. Finally, we would like to purchase a domain and host our site on a reliable server. Currently we host it on a free-plan server, so when other people try to access it, it can be unreliable; since we want it to be accessible to the world, not just as a local host, we are planning on purchasing a dedicated domain to host our site effectively.
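The Cohere reranking step described in the scholarship backend can be illustrated with a few lines of the Python SDK. The API key, model name, and sample scholarships below are placeholders, and the real service reranks 500+ entries and keeps the top 12.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")   # placeholder key

scholarships = [
    "STEM entrance award for first-generation students in Ontario, $5,000",
    "Athletic leadership scholarship for varsity athletes, $2,500",
    "Community volunteering bursary for students with 100+ service hours, $1,000",
]

profile = "First-generation student from Ontario entering computer engineering with robotics volunteering"

ranked = co.rerank(model="rerank-english-v3.0", query=profile, documents=scholarships, top_n=3)
for result in ranked.results:
    print(round(result.relevance_score, 3), scholarships[result.index])
```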
partial
## Inspiration Herpes Simplex Virus-2 (HSV-2) is the cause of Genital Herpes, a lifelong and contagious disease characterized by recurring painful and fluid-filled sores. Transmission occurs through contact with fluids from the sores of the infected person during oral, anal, and vaginal sex; transmission can occur in asymptomatic carriers. HSV-2 is a global public health issue with an estimated 400 million people infected worldwide and 20 million new cases annually - 1/3 of which take place in Africa (2012). HSV-2 increases the risk of acquiring HIV threefold, profoundly affects the psychological well-being of the individual, and poses a devastating neonatal complication. The social ramifications of HSV-2 are enormous. The social stigma of sexually transmitted diseases (STDs) and the taboo of confiding in others mean that patients are often left on their own, to the detriment of their sexual partners. In Africa, the lack of healthcare professionals further exacerbates this problem. Further, the 2:1 ratio of female to male patients is reflective of the gender inequality where women are ill-informed and unaware of their partners' condition or their own. Most importantly, the symptoms of HSV-2 are often similar to various other, less severe dermatological issues, such as common candida infections and inflammatory eczema. It's very easy to dismiss Genital Herpes as one of these latter conditions, which are much less severe and non-contagious. ## What it does Our team from Johns Hopkins has developed the humanitarian solution “Foresight” to tackle the taboo issue of STDs. Offered free of charge, Foresight is a cloud-based identification system which allows a patient to take a picture of a suspicious skin lesion with a smartphone and to diagnose the condition directly in the iOS app. We have trained the computer vision and machine-learning algorithm, which is downloaded from the cloud, to differentiate between Genital Herpes and the less serious eczema and candida infections. We have a few main goals: 1. Remove the taboo involved in treating STDs by empowering individuals to make diagnoses independently through our computer vision and machine learning algorithm. 2. Alleviate specialist shortages 3. Prevent misdiagnosis and inform patients to seek care if necessary 4. Location services allow for snapshots of local communities and enable more potent public health intervention 5. Protect the sexual relationship between couples by allowing for transparency - diagnose your partner! ## How I built it We first gathered 90 different images of 3 categories (30 each) of skin conditions that are common around the genital area: "HSV-2", "Eczema", and "Yeast Infections". We realized that a good way to differentiate between these conditions is through the inherent differences in texture, which, although subtle to the human eye, are very perceptible to good algorithms. We take advantage of the Bag of Words model common in the field of Web Crawling and Information Retrieval, and apply a similar algorithm, which is written from scratch except for the feature identifier (SIFT). The algorithm follows: Part A) Training the Computer Vision and Machine Learning Algorithm (Python) 1. We use a Computer Vision feature-identifying algorithm called SIFT to process each image and to identify "interesting" points like corners and other patches that are highly unique 2. We consider each patch around the "interesting" points as textons, or units of characteristic textures 3. 
We build a vocabulary of textons by identifying the SIFT points in all of our training images, and use the machine learning algorithm k-means clustering to narrow them down to a list of 1000 "representative" textons 4. For each training image, we build our own descriptor represented as a vector, where each element of the vector is the normalized frequency of a texton. We further use tf-idf (term frequency, inverse document frequency) weighting to improve the representation capabilities of each vector. (all of this is manually programmed) 5. Finally, we save these vectors in memory. When we want to determine which of the 3 categories a test image depicts, we encode the test image into the same tf-idf vector representation, and apply a k-nearest neighbors search to find the optimal class. We have found through experimentation that k=4 works well as a trade-off between accuracy and speed. 6. We tested this model with a randomly selected subset that is 10% the size of our training set and achieved 89% prediction accuracy! Part B) Ruby on Rails Backend 1. The machine learning model above can be expressed as an aggregate of 3 files: cluster centers in SIFT space, tf-idf statistics, and classified training vectors in cluster space 2. We output the machine learning model as CSV files from Python, and write an injector in Ruby that inserts the trained model into our PostgreSQL database on the backend 3. We expose the API such that our mobile iOS app can download our trained model directly through an HTTPS endpoint. 4. Beyond storage of our machine learning model, our backend also includes a set of API endpoints catered to public health purposes: each time an individual on the iOS app makes a diagnosis, the backend is updated to reflect the demographic information and diagnosis results of the individual's actions. This information is visible on our web frontend. Part C) iOS app 1. The app takes in demographic information from the user and downloads a copy of the trained machine learning model from our RoR backend once 2. Once the model has been downloaded, it is possible to make diagnoses even without internet access 3. The user can take an image directly or upload one from the phone library for diagnosis, and a diagnosis is given in several seconds 4. When the diagnosis is given, the demographic and diagnostic information is uploaded to the backend Part D) Web Frontend 1. Our frontend leverages the stored community data (pooled from diagnoses made on individual phones) accessible via our backend API 2. The actual web interface is a portal for public health professionals like epidemiologists to understand the STD trends (as pertaining to our 3 categories) in a certain area. The heat map is live. 3. Used HTML5, CSS3, JavaScript, and jQuery ## Challenges I ran into It is hard to find current STD prevalence and incidence reports outside the United States. Many African countries have limited surveillance data, and the situation is even worse for stigmatized diseases. We collected the global HSV-2 prevalence and incidence report from the World Health Organization (WHO) in 2012. Another issue we faced is the ethics of collecting disease status from users. We were also conflicted on whether we should inform the user's spouse of their result. It is an ethical dilemma between patient confidentiality and beneficence. ## Accomplishments that I'm proud of 1. 
We successfully built a cloud-based picture recognition system that distinguishes between HSV-2, yeast infection, and eczema skin lesions using a machine learning algorithm, with 89% accuracy on a randomly selected test set that is 10% of the training size. 2. Our mobile app lets users anonymously send their pictures to our cloud database for recognition, avoiding the stigmatization of STDs from their neighbors. 3. From a public health perspective, mapping the demographic distribution of STDs in Africa could assist in preventing HSV-2 infection and in providing more medical advice to eligible patients. ## What I learned We learned much more about HSV-2 on the ground and its ramifications for society. We also learned about ML, computer vision, and other technological solutions available for STD image processing. ## What's next for Foresight Extrapolating our workflow for Machine Learning and Computer Vision to other diseases, and expanding our reach to other developing countries.
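To make the Foresight pipeline above concrete, here is a minimal, illustrative sketch of the bag-of-visual-words training step (SIFT textons, a k-means vocabulary, tf-idf weighting, and k-NN with k=4). It assumes OpenCV and scikit-learn; the function and variable names are ours, not the team's actual code.

```python
# Minimal bag-of-visual-words sketch (illustrative; not the team's exact code).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128))

def build_vocabulary(paths, k=1000):
    all_desc = np.vstack([sift_descriptors(p) for p in paths])
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)

def bovw_vector(path, kmeans, idf=None):
    words = kmeans.predict(sift_descriptors(path))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    tf = hist / max(hist.sum(), 1.0)            # term frequency
    return tf * idf if idf is not None else tf  # tf-idf weighting

def train(paths, labels, k_vocab=1000):
    kmeans = build_vocabulary(paths, k_vocab)        # texton vocabulary
    counts = np.vstack([bovw_vector(p, kmeans) for p in paths])
    df = (counts > 0).sum(axis=0)                    # document frequency per texton
    idf = np.log(len(paths) / (1.0 + df))
    clf = KNeighborsClassifier(n_neighbors=4).fit(counts * idf, labels)
    return kmeans, idf, clf
```

A test image would then be encoded with the same `bovw_vector(path, kmeans, idf)` call and classified with `clf.predict`, mirroring step 5 of Part A.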
## Inspiration The inspiration behind LeafHack stems from a shared passion for sustainability and a desire to empower individuals to take control of their food sources. Witnessing the rising grocery costs and the environmental impact of conventional agriculture, we were motivated to create a solution that not only addresses these issues but also lowers the barriers to home gardening, making it accessible to everyone. ## What it does Our team introduces "LeafHack" an application that leverages computer vision to detect the health of vegetables and plants. The application provides real-time feedback on plant health, allowing homeowners to intervene promptly and nurture a thriving garden. Additionally, the images uploaded can be stored within a database custom to the user. Beyond disease detection, LeafHack is designed to be a user-friendly companion, offering personalized tips and fostering a community of like-minded individuals passionate about sustainable living ## How we built it LeafHack was built using a combination of cutting-edge technologies. The core of our solution lies in the custom computer vision algorithm, ResNet9, that analyzes images of plants to identify diseases accurately. We utilized machine learning to train the model on an extensive dataset of plant diseases, ensuring robust and reliable detection. The database and backend were built using Django and Sqlite. The user interface was developed with a focus on simplicity and accessibility, utilizing next.js, making it easy for users with varying levels of gardening expertise ## Challenges we ran into We encountered several challenges that tested our skills and determination. Fine-tuning the machine learning model to achieve high accuracy in disease detection posed a significant hurdle as there was a huge time constraint. Additionally, integrating the backend and front end required careful consideration. The image upload was a major hurdle as there were multiple issues with downloading and opening the image to predict with. Overcoming these challenges involved collaboration, creative problem-solving, and continuous iteration to refine our solution. ## Accomplishments that we're proud of We are proud to have created a solution that not only addresses the immediate concerns of rising grocery costs and environmental impact but also significantly reduces the barriers to home gardening. Achieving a high level of accuracy in disease detection, creating an intuitive user interface, and fostering a sense of community around sustainable living are accomplishments that resonate deeply with our mission. ## What we learned Throughout the development of LeafHack, we learned the importance of interdisciplinary collaboration. Bringing together our skills, we learned and expanded our knowledge in computer vision, machine learning, and user experience design to create a holistic solution. We also gained insights into the challenges individuals face when starting their gardens, shaping our approach towards inclusivity and education in the gardening process. ## What's next for LeafHack We plan to expand LeafHack's capabilities by incorporating more plant species and diseases into our database. Collaborating with agricultural experts and organizations, we aim to enhance the application's recommendations for personalized gardening care.
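As a rough illustration of the inference step a backend like LeafHack's performs, the sketch below runs an uploaded leaf image through a trained PyTorch classifier and returns the top class with its softmax confidence. The class names, input size, and file paths are placeholders, not LeafHack's actual values.

```python
# Illustrative inference step for a plant-disease classifier (placeholder names).
import torch
from torchvision import transforms
from PIL import Image

CLASSES = ["healthy", "early_blight", "late_blight"]  # placeholder labels

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

def predict(model, image_path):
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return CLASSES[idx.item()], conf.item()

# Example, assuming the trained ResNet-9 model was exported with torch.save:
# model = torch.load("resnet9_plant.pth", map_location="cpu")
# label, confidence = predict(model, "uploaded_leaf.jpg")
```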
## Inspiration By 2050, 16% of the global population will be elderly. Around 1.5 billion people will be above the age of 65. Professionals will not be able to cope with this increased demand for quality healthcare. Many elders don't get timely treatment, and emergencies are a constant fear for their children. Artificial Intelligence is the solution. ## What it does * Diagnose disease * Offer medicine recommendations * Send daily reports * Create emergency calls to 911 * Process injury images ## Technology we used * MongoDB * Node.js * Express.js * Python * JavaScript * Twilio * Amazon Echo (hardware) * Camera (hardware) * Machine Learning * Computer Vision ## Challenges we ran into * Integrating the Naive Bayesian and Decision Tree models for our limited test set data. * Running a Python file from Node.js ## Accomplishments that we're proud of Integrating and building the backend for Alexa. ## What we learned How to integrate the backend with cloud services and an intelligent speech-to-text system. ## What's next for Dr. Jarvis A larger dataset and utilizing deep convolutional neural networks for multiclass classification. A high-resolution camera integrated with the system to detect visible skin diseases from a persistent trained dataset.
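Since Dr. Jarvis combines Naive Bayes and decision tree models, here is a small hedged sketch of one way the two could be integrated, using scikit-learn's soft-voting ensemble. The symptom encoding and labels are toy placeholders, not the project's real data or exact integration method.

```python
# Illustrative symptom-to-diagnosis ensemble (hypothetical feature encoding).
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

# Rows: one-hot symptom vectors; labels: condition names (toy data only).
X = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
y = np.array(["flu", "allergy", "flu", "infection"])

ensemble = VotingClassifier(
    estimators=[("nb", MultinomialNB()), ("dt", DecisionTreeClassifier(max_depth=3))],
    voting="soft",  # average the two models' class probabilities
)
ensemble.fit(X, y)
print(ensemble.predict([[1, 0, 1, 0]]))  # -> ['flu'] on this toy data
```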
**Finding a problem** Education policy and infrastructure tend to neglect students with accessibility issues. They are oftentimes left on the backburner while funding and resources go into research and strengthening the existing curriculum. Thousands of college students struggle with taking notes in class due to various learning disabilities that make it difficult to process information quickly or write down information in real time. Over the past decade, Offices of Accessible Education (OAE) have been trying to help support these students by hiring student note-takers and increasing ASL translators in classes, but OAE is constrained by limited funding and low interest from students to become notetakers. This problem has been particularly relevant for our TreeHacks group. In the past year, we have become notetakers for our friends because there are not enough OAE notetakers in class. Being note writers gave us insight into what notes are valuable for those who are incredibly bright and capable but struggle to write. This manual process where we take notes for our friends has helped us become closer as friends, but it also reveals a systemic issue of accessible notes for all. Coming into this weekend, we knew note taking was an especially interesting space. GPT3 had also been on our mind as we had recently heard from our neurodivergent friends about how it helped them think about concepts from different perspectives and break down complicated topics. **Failure and revision** Our initial idea was to turn videos into transcripts and feed these transcripts into GPT-3 to create the lecture notes. This idea did not work out because we quickly learned the transcript for a 60-90 minute video was too large to feed into GPT-3. Instead, we decided to incorporate slide data to segment the video and use slide changes to organize the notes into distinct topics. Our overall idea had three parts: extract timestamps the transcript should be split at by detecting slide changes in the video, transcribe the text for each video segment, and pass in each segment of text into a gpt3 model, fine-tuned with prompt engineering and examples of good notes. We ran into challenges every step of the way as we worked with new technologies and dealt with the beast of multi-gigabyte video files. Our main challenge was identifying slide transitions in a video so we could segment the video based on these slide transitions (which signified shifts in topics). We initially started with heuristics-based approaches to identify pixel shifts. We did this by iterating through frames using OpenCV and computing metrics such as the logarithmic sum of the bitwise XORs between images. This approach resulted in several false positives because the compressed video quality was not high enough to distinguish shifts in a few words on the slide. Instead, we trained a neural network using PyTorch on both pairs of frames across slide boundaries and pairs from within the same slide. Our neural net was able to segment videos based on individual slides, giving structure and organization to an unwieldy video file. The final result of this preprocessing step is an array of timestamps where slides change. Next, this array was used to segment the audio input, which we did using Google Cloud’s Speech to Text API. This was initially challenging as we did not have experience with cloud-based services like Google Cloud and struggled to set up the various authentication tokens and permissions. 
We also ran into the issue of the videos taking a very long time, which we fixed by splitting the video into smaller clips and then implementing multithreading approaches to run the speech-to-text processes in parallel. **New discoveries** Our greatest discoveries lay in the fine-tuning of our multimodal model. We implemented a variety of prompt engineering techniques to coax our generative language model into producing the type of notes we wanted from it. In order to overcome the limited context size of the GPT-3 model we utilized, we iteratively fed chunks of the video transcript into the OpenAI API. We also employed both positive and negative prompt training to incentivize our model to produce output similar to our desired notes in the output latent space. We were careful to manage the external context provided to the model to allow it to focus on the right topics while avoiding extraneous tangents that would be incorrect. Finally, we sternly warned the model to follow our instructions, which did wonders for its obedience. These challenges and solutions seem seamless, but our team was on the brink of not finishing many times throughout Saturday. The worst was around 10 PM. I distinctly remember my eyes slowly closing, a series of crumpled papers scattered near the trash can. Each of us was drowning in new frameworks and technologies. We began to question how a group of students, barely out of intro-level computer science, could think to improve education. The rest of the hour went in a haze until we rallied around a text from a friend who sent us some amazing CS notes we had written for them. Their heartfelt words of encouragement about how our notes had helped them get through the quarter gave us the energy to persevere and finish this project. **Learning about ourselves** We found ourselves, after a good amount of pizza and a bit of caffeine, diving back into documentation for React, Google Text-to-Speech, and Docker. For hours, our eyes grew heavy, but their luster never faded. More troubles arose. There were problems implementing a payment system and never-ending CSS challenges. Ultimately, our love of exploring technologies we were unfamiliar with helped fuel our inner passion. We knew we wanted to integrate Checkbook.io’s unique payments tool, and though we found their API well architected, we struggled to connect to it from our edge-compute-centric application. Checkbook’s documentation was incredibly helpful, however, and we were able to adapt the code that they had written for a NodeJS server-side backend into our browser runtime to avoid needing to spin up an entirely separate finance service. We are thankful to Checkbook.io for the support their team gave us during the event! Finally, at 7 AM, we connected the backend of our website with the fine-tuned GPT-3 model. I clicked on CS106B and was greeted with an array of lectures to choose from. After choosing last week’s lecture, a clean set of notes was exported in LaTeX, perfect for me to refer to when working on the PSET later today! We jumped off of the couches we had been sitting on for the last twelve hours and cheered. A phrase bounced inside my mouth like a rubber ball, “I did it!” **Product features** * Real-time video-to-notes upload * Multithreaded video upload framework * Database of lecture notes for popular classes * Neural network to organize video into slide segments * Multithreaded video-to-transcript pipeline
## Inspiration We're 4 college freshmen that were expecting new experiences with interactive and engaging professors in college; however, COVID-19 threw a wrench in that (and a lot of other plans). As all of us are currently learning online through various video lecture platforms, we found out that these lectures sometimes move too fast or are just flat-out boring. Summaread is our solution to transform video lectures into an easy-to-digest format. ## What it does "Summaread" automatically captures lecture content using an advanced AI NLP pipeline to automatically generate a condensed note outline. All one needs to do is provide a YouTube link to the lecture or a transcript and the corresponding outline will be rapidly generated for reading. Summaread currently generates outlines that are shortened to about 10% of the original transcript length. The outline can also be downloaded as a PDF for annotation purposes. In addition, our tool uses the Google cloud API to generate a list of Key Topics and links to Wikipedia to encourage further exploration of lecture content. ## How we built it Our project is comprised of many interconnected components, which we detail below: **Lecture Detection** Our product is able to automatically detect when lecture slides change to improve the performance of the NLP model in summarizing results. This tool uses the Google Cloud Platform API to detect changes in lecture content and records timestamps accordingly. **Text Summarization** We use the Hugging Face summarization pipeline to automatically summarize groups of text that are between a certain number of words. This is repeated across every group of text previous generated from the Lecture Detection step. **Post-Processing and Formatting** Once the summarized content is generated, the text is processed into a set of coherent bullet points and split by sentences using Natural Language Processing techniques. The text is also formatted for easy reading by including “sub-bullet” points that give a further explanation into the main bullet point. **Key Concept Suggestions** To generate key concepts, we used the Google Cloud Platform API to scan over the condensed notes our model generates and provide wikipedia links accordingly. Some examples of Key Concepts for a COVID-19 related lecture would be medical institutions, famous researchers, and related diseases. **Front-End** The front end of our website was set-up with Flask and Bootstrap. This allowed us to quickly and easily integrate our Python scripts and NLP model. ## Challenges we ran into 1. Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening conversational sentences like those found in a lecture into bullet points. 2. Our NLP model is quite large, which made it difficult to host on cloud platforms ## Accomplishments that we're proud of 1) Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques. 
2) Working on an unsolved machine learning problem (lecture simplification) 3) Real-time text analysis to determine new elements ## What we learned 1) First time for multiple members using Flask and doing web development 2) First time using Google Cloud Platform API 3) Running deep learning models makes my laptop run very hot ## What's next for Summaread 1) Improve our summarization model through improving data pre-processing techniques and decreasing run time 2) Adding more functionality to generated outlines for better user experience 3) Allowing for users to set parameters regarding how much the lecture is condensed by
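For reference, a minimal sketch of the chunked summarization step Summaread describes: feed each slide-aligned transcript segment (split further to stay under the model's input limit) through a Hugging Face summarization pipeline and collect bullet points. The model name and chunk sizes below are our assumptions, not necessarily Summaread's exact choices.

```python
# Illustrative chunked summarization of a lecture transcript (model choice is ours).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def outline(segments, max_words=400):
    """segments: list of transcript chunks, e.g. one per detected slide change."""
    bullets = []
    for text in segments:
        words = text.split()
        for i in range(0, len(words), max_words):          # stay under model limits
            chunk = " ".join(words[i:i + max_words])
            summary = summarizer(chunk, max_length=60, min_length=15,
                                 do_sample=False)[0]["summary_text"]
            bullets.append("- " + summary.strip())
    return "\n".join(bullets)
```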
## Inspiration Most students know the feeling of being behind in classes, and Stanford really tries to help us out. We get both lecture slides and videos for many classes, but neither alone is sufficient. Lecture slides are useful for quick information look up, but are often too dense to interpret without explanation. On the other hand, videos fill in these holes in understanding, but are riddled with superfluous information that can take hours to parse through. What if we could combine these two resources to create a fully integrated visual and auditory learning experience? ## What it does Slip establishes a two-way mapping between class videos and slides to allow for a seamless transition between the two. Watch a few slides until the material gets too dense and then click the slide to instantly move to the exact point in the video where that same concept is being explained. By fully integrating classroom resources, Slip allows students to navigate between class notes, slides, and videos with a single click. ## How we built it We collected our source data by extracting all the slides from lecture notes with ImageMagick and key frames from the class video using ffmpeg. After extraction, we use SIFT to identify the slide, if present, in every frame, and OCR (optical character recognition) to see how closely the text in each slide/frame matches up. By combining these two metrics, we can compute optimal slide and frame mappings for the entire lecture with 90-95% confidence. ## Challenges we ran into Accuracy is extremely important, but often videos don’t have great captures of the slides. Neither image processing nor OCR alone was enough to reach an accuracy we liked, but they complement each other very well. OCR is very good for text-heavy slides and image processing is very effective on others. Even using both together, the algorithm still found incorrect mappings much of the time. The big trick for great accuracy was using the knowledge that we have the slides in order. This allows us to not simply look for the best frame for each slide, but the best set of frames for all the slides at once, such that the slides are in order. Optimizing this in a reasonable amount of time required a clever dynamic programming solution, but greatly increased accuracy. ## Accomplishments that we're proud of Definitely accuracy. We came in with very little knowledge of image processing and ended up getting some really good accuracy. We also built a seamless front end that makes it super simple for the user to switch between video and lecture, maximizing productivity. ## What we learned If it can go wrong, it will go wrong. We definitely had issues along the way, from libraries with bad documentation, to hard-to-find bugs, to being completely unsure of how to proceed, but together we powered through. ## What's next for Slip Improve the speed of the image processing algorithm.
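The in-order constraint Slip exploits can be written as a dynamic program: for each slide, pick the best frame that comes after the frame chosen for the previous slide. Below is a small illustrative version that assumes a precomputed slide-by-frame similarity matrix (for example, from SIFT matches plus OCR text overlap); it is a sketch of the idea, not Slip's actual implementation.

```python
# Illustrative monotone slide-to-frame alignment via dynamic programming.
# score[i][j] is an assumed precomputed similarity between slide i and frame j.
import numpy as np

def align(score):
    n_slides, n_frames = score.shape
    NEG = -1e18
    best = np.full((n_slides, n_frames), NEG)   # best[i, j]: slide i placed at frame j
    back = np.zeros((n_slides, n_frames), dtype=int)

    best[0] = score[0]
    for i in range(1, n_slides):
        # Running maximum over earlier frames keeps the slides in order.
        run_val, run_j = NEG, -1
        for j in range(1, n_frames):
            if best[i - 1, j - 1] > run_val:
                run_val, run_j = best[i - 1, j - 1], j - 1
            best[i, j] = run_val + score[i, j]
            back[i, j] = run_j

    # Recover one frame index per slide, in increasing order.
    frames = [int(np.argmax(best[-1]))]
    for i in range(n_slides - 1, 0, -1):
        frames.append(back[i, frames[-1]])
    return frames[::-1]
```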
## What it's all About 40% of food produced in Canada goes to waste every year. We created CookMe, a smartphone application to help reduce food waste, and allow Canadians to save the money lost on discarding expired groceries. CookMe provides a way to scan receipts from grocery stores to remember what you have in your kitchen and pantry at home. It logs when your groceries were purchased to help you keep track of expiration dates and reminds you when you have food that is approaching its best before date. CookMe also serves as a useful database of your groceries at home. You can use the app to help plan your shopping list and answer the classic question, "Do we have \_\_\_\_\_ at home?" Using its database of your groceries, CookMe also provides suggested recipes based on the ingredients you have in your kitchen, and provides in-app access to quick, one-minute recipe videos. For those who are always discarding mouldy bread and sour milk, CookMe is the solution to limit waste, save money, and keep your kitchen stocked. ## How we built it We built CookMe using an adapted hackathon-Agile process which involved 4-hour sprints, regular reprioritization and validation with other hackers, sponsors and organizers. It's an Android app which leverages the ZXing API to scan QR codes on receipts which it uses to store grocery objects in an SQLite database. CookMe also uses YouTube's public API to query data from Tasty's recipe YouTube videos, which it embeds in the application's UI. The entire project was version controlled through Git. ## Challenges we ran into **Finding our Market** Finding our target audience was difficult since the monetization of the app can operate in multiple ways. On one hand, the app can be grown to users who would be interested in a coupon feature. On the other hand, grocery stores may be interested in suggesting products to the customer while they shop, making the product B2B. **Running multiple threads** When making `https` requests through the YouTube API, we had the issue of responses not being received before synchronous intervention. We ended up incorporating an executor service which allowed us to wait for all the tasks to complete before the control signal continued onwards. **Not Time Boxing Certain Tasks** Some original features and UI implementations offer poor value to the customer for the time spent developing them. We ended up dropping some of these features to pursue more valuable ones; however, time boxing features is important when working in an environment which thrives on effectuation. ## Accomplishments that we're proud of None of us have worked on an Android application like CookMe. Being able to incorporate an SQL database, embedding YouTube videos, a QR code reader and having a shippable MVP all compose what we are proud of accomplishing during the hackathon. ## What we learned Validation is incredibly important when creating a product from the ground up. Working in an Agile environment is great for this and allowed us to go back to the drawing board every few hours to make sure we're heading in the right direction. KISS - We had plans for implementing AR into the product for our hack. Despite how cool AR is, creating a product that focuses on what the customer cares about should always take precedence. We were able to add much more value to the customer through simple tables and databases than we would have if we used AR. ## What's next for CookMe A suggested products feature had positive feedback during the hackathon.
Being able to suggest products to a customer so that they can try some unique recipes would be beneficial to both grocery stores and customers. A coupon integration feature would allow customers to save even more money than if they didn't use our application.
It can often be hard to find useful resources and materials when studying or reviewing for a class. It can be even harder to find someone to study with. StudyBees is a platform that helps students solve that problem by pairing up students and allowing them to collaborate and share materials and notes. We believe that this powerful tool will allow students to gain a better understanding of their material and to shine in their studies. ## Inspiration We’ve only just come back to college, and we are already feeling the pinch. Having a platform to instantly connect with a study partner would be a dream come true. ## What it does Connects students and allows them to interact through a chat service, a shared to-do list, and a collaborative canvas and text editor. We also plan to incorporate document sharing through upload and download but were only able to do so on a limited basis within the time constraint. ## How we built it We used Angular 6 to build the entire frontend and we used MongoDB Stitch as a backend service for user authentication and profile retrieval. We hosted the Angular site using S3 and CloudFront for distribution and used AWS EC2 for websockets which were served using Express.js, Node.js, and Socket.io. ## Challenges we ran into We began building the project late, so time was always against us. We also ran into issues using Socket.io to connect users because it was separate from our Stitch backend service. We also had some issues implementing collaborative editing for the text editor and ultimately had to make some compromises in functionality. ## Accomplishments that we're proud of We are incredibly proud of building a fully-fledged application with a beautiful and responsive design. While we have worked with many of these technologies before, we are very happy that our prior experience allowed us to overcome our late start. ## What we learned We learned a lot about MongoDB Stitch, which is an incredibly powerful backend tool that we look forward to using in the future. We also were able to explore more in-depth uses of Angular and Socket.io. ## What's next for StudyBees We hope to add some more functionality to allow users to save and share documents and we hope to improve the collaborative editing we have now.
## Inspiration It's pretty common that you will come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that you'll throw away food that used to be perfectly good. But that's only how much food you and I are wasting. What about everything that Walmart or Costco trashes on a day to day basis? Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year. About 30 percent of food in American grocery stores is thrown away. US retail stores generate about 16 billion pounds of food waste every year. But, if there was a solution that could ensure that no food would be needlessly wasted, that would change the world. ## What it does PantryPuzzle will scan in images of food items as well as extract their expiration dates, and add them to an inventory of items that users can manage. When food nears expiration, it will notify users to incentivize action to be taken. The app will suggest actions to take with any particular food item, like recipes that use the items in a user's pantry according to their preference. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers. ## How we built it We built it with a React frontend and a Python Flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of the Google Vision API's object detection and optical character recognition (OCR) respectively. For the recipe recommendation feature, we used OpenAI's GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used the Nominatim OpenStreetMap service. ## Challenges we ran into * Getting React to properly display * Storing multiple values into the database at once (food item, expiration date) * How to display all Firebase elements (doing a proof of concept with console.log) * Donated food being displayed before even clicking the button (fixed by using a function for the onclick here) * Getting the location of the user to be accessed and stored, not just longitude/latitude * Needing to log the day that a food item was obtained * Deleting an item when it expires * Syncing my stash with donations (we don't want an item listed if the user no longer wants to donate it) * How to delete the food from Firebase (tricky because of the document ID) * Predicting when non-labeled foods expire (using OpenAI) ## Accomplishments that we're proud of * We were able to get a good computer vision algorithm that is able to detect the type of food and a very accurate expiry date. * Integrating the API that helps us figure out our location from the latitudes and longitudes. * Used a scalable database like Firebase, and completed all features that we originally wanted to achieve regarding generative AI, computer vision and efficient CRUD operations. ## What we learned We learned how big of a problem food waste disposal is, and were surprised to know that so much food was being thrown away. ## What's next for PantryPuzzle We want to add user authentication, so every user in every home and grocery has access to their personal pantry, and also maintains their access to the global donations list to search for food items others don't want.
We want to integrate this app with the Internet of Things (IoT) so refrigerators can come with this product built in to detect food and its expiry date. We also want to add a feature where, if the expiry date is not visible, the app can predict what the likely expiration date could be using computer vision (texture and color of food) and generative AI.
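As a small illustration of the expiration-date step, the sketch below pulls a date out of OCR text and decides whether a reminder is due. The date formats and the three-day threshold are assumptions for the sake of the example, not PantryPuzzle's exact rules.

```python
# Illustrative post-processing of OCR text to extract an expiration date and
# flag items that are close to expiring (date formats here are assumptions).
import re
from datetime import datetime, date

DATE_PATTERNS = [
    (r"\b(\d{4})-(\d{2})-(\d{2})\b", "%Y-%m-%d"),
    (r"\b(\d{2})/(\d{2})/(\d{4})\b", "%m/%d/%Y"),
    (r"\b([A-Za-z]{3}) (\d{1,2}),? (\d{4})\b", "%b %d %Y"),
]

def extract_expiry(ocr_text):
    for pattern, fmt in DATE_PATTERNS:
        m = re.search(pattern, ocr_text)
        if m:
            raw = " ".join(m.groups()) if " " in fmt else m.group(0)
            try:
                return datetime.strptime(raw, fmt).date()
            except ValueError:
                continue
    return None

def needs_reminder(expiry, days_ahead=3):
    return expiry is not None and (expiry - date.today()).days <= days_ahead

print(needs_reminder(extract_expiry("BEST BEFORE 2024-03-01")))
```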
## Inspiration There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life. ## What it does We utilized a smartphone camera to analyze the surrounding and warn visually impaired people about obstacles that were in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with and we stored their name and facial attributes that can be recalled later. An Alzheimer's patient can utilize the face recognition software to remind the patient of who the person is, and when they last saw him. ## How we built it We built our app around **Azure's APIs**, we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it. ## Challenges we ran into Through the process of coding and developing our idea we ran into several technical difficulties. Our first challenge was to design a simple UI, so that the visually impaired people could effectively use it without getting confused. The next challenge we ran into was attempting to grab visual feed from the camera, and running them fast enough through the Azure services to get a quick response. Another challenging task that we had to accomplish was to create and train our own neural network with relevant data. ## Accomplishments that we're proud of We are proud of several accomplishments throughout our app. First, we are especially proud of setting up a clean UI with two gestures, and voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network, that was capable of identifying faces and objects. ## What we learned We learned how to implement **Azure Custom Vision and Azure Face APIs** into **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for the rest of us to learn about neural networking. ## What's next for BlindSpot In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device.
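A rough sketch of the frame-sampling idea BlindSpot relies on, shown here in Python with OpenCV rather than the team's iOS code: grab a camera frame on an interval and post it to a vision endpoint so the service is not flooded with requests. The endpoint URL and header are placeholders, not Azure's actual values.

```python
# Illustrative frame-sampling loop (placeholder endpoint; not the team's Swift app).
import time
import cv2
import requests

ENDPOINT = "https://example-vision-service/analyze"       # placeholder URL
HEADERS = {"Authorization": "Bearer YOUR_KEY",             # placeholder credentials
           "Content-Type": "application/octet-stream"}

def stream_and_analyze(interval_s=1.0):
    cap = cv2.VideoCapture(0)
    last = 0.0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if time.time() - last >= interval_s:           # throttle API calls
                _, jpeg = cv2.imencode(".jpg", frame)
                resp = requests.post(ENDPOINT, headers=HEADERS,
                                     data=jpeg.tobytes(), timeout=5)
                print(resp.json())                          # e.g. detected objects/faces
                last = time.time()
    finally:
        cap.release()
```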
## Problem Definition Online and mobile banking has revolutionized the way we manage our money. We can send money, transfer between accounts, and even deposit cheques from the phone in our pocket. The challenge with introducing convenient banking features is ensuring the customer’s information and finances remain secure throughout the whole process without introducing unnecessary complexity. To accomplish such a large feat, banks use state-of-the-art security technology such as biometric authentication on mobile applications and advanced web protocols. However, there is one feature of online and mobile banking which we believe has the potential to compromise the user’s private data. This is that the only way to download bank statements from the bank website is as a PDF document, which is not secure. The bank statement contains a lot of private user information, such as transaction history, account numbers and balance, and personal information such as address. All this information is being stored in plain text on the local storage of the user’s device once it is downloaded, creating a possibility for data theft if the customer’s computer is compromised. ## The Solution Our proposed solution is the Bank Statement Encryption (BSE) Protocol. BSE is our method of securing a user's bank statements while still being able to keep them locally on their machine at all times, without needing internet access. This is perfect as it will allow the user to still feel like they have ownership over their bank statements while at the same time reducing the risk of their private information and account details being compromised. How it works is that instead of downloading an unencrypted PDF, the user will be able to download an encrypted PDF through their mobile app. When they do this, the password for this encrypted PDF is also sent to them but is stored within the secure Keystore or Keychain, depending on the platform. The user never actually has to, or is even able to, look at the password; this stops them from just writing it down somewhere unencrypted on the phone and compromising the security. To view the PDF bank statement, the user simply needs to go back into their mobile app and provide bio-authentication; this then allows the app to read the password from secure storage, which in turn allows the app to open the PDF and show it to the user. Doing it this way, the plaintext bank statement is never actually stored on the phone, but the user is still able to save their statements on their phone and access them without internet access. ## Implementation Details The implementation can be broken down into several parts: sending the user the encrypted statement, storing the information used to decrypt the statement (whether that be a password or a key), storing that information safely on the device, and allowing the user to access it afterwards via biometric authentication. The biggest design choice was deciding between using the standard password-based encryption that PDFs support, or using our own in-house method of encrypting the files. For our implementation, we used the default encryption that the PDF file format supports because it was simpler for us to put together in the short time we had. However, the actual difference in implementation between the two methods is very minor, just changing what you store in the Keystore or Keychain on the user's device. We created a mock banking app that allowed a user to download their bank statements and keep them saved on their phone.
When the user requests to download a statement, two things are securely sent to them: an encrypted PDF containing their statement, and the password to decrypt that PDF. In our implementation, we thought actually sending something from a server would be unnecessary, so we streamlined the process by just storing the encrypted PDFs and their passwords in the project. Obviously, if a real bank were to do this, they would have to actually send the user this information. The password is then saved into their phone's Keystore or Keychain so that it can be secured. Importantly, the user can’t interact with this password at all, stopping them from compromising the security by storing it in plaintext somewhere on their device. After the user downloads their statement, they are then able to return to the app at any time to view any of their previously downloaded statements. To view them, they have to provide bio-authentication, which then allows the password to be read from the secure container on their phone. During this process, the unencrypted PDF is never saved onto the phone, only ever loaded into memory. If an actual bank were to implement this idea, an important feature that the bank's mobile app would have to incorporate would be changing its offline behavior in the following way. Instead of being prompted to log in, which they couldn’t do anyway because of their lack of internet connection, the user will be brought right to the screen which allows them to view their previously downloaded statements, after the proper authentication of course. ## Resources Used The demo of our app was made using Flutter and Android Studio. Flutter is an open-source UI SDK that allows developers to build applications for a variety of platforms, specifically, for this project, Android. With Flutter, we were able to utilize a variety of open source packages created for Flutter that aided in the construction of the application demo. These packages include: * google\_fonts: Used to gain access to a variety of fonts for the application * flutter\_svg: Used to handle the SVG files we used for our logos * local\_auth: Used to access biometric authentication on the Android device * flutter\_secure\_storage: Used to store the PDF passwords in the Keystore * flutter\_pdf\_viewer: Used to handle viewing PDFs ## Challenges we ran into The challenges that we ran into while designing this demo were plentiful, but three major ones truly tested us and forced us to deepen our understanding of the concepts we were attempting to tackle. First of all, learning Flutter and Dart was an extremely long task. We had to learn how Flutter and Dart worked in order to create our demo. The fundamental way that the SDK worked was different from what we were used to as developers. We had to overcome this by engaging in long and detailed tutorials that taught us the basics and built our knowledge up before we even wrote a single line of code. The second challenge we faced was finding PDF packages that worked. Because Flutter uses open source packages in order to expand its libraries, there were many options for a package that handled PDFs; however, as we soon realized, very few met our specific needs. On top of this, the documentation for many of these packages was often lackluster, requiring us to look at the actual libraries and try to understand what features they possessed. This, coupled with the sheer number of packages we tried, helped us become more comfortable with using Flutter and Dart.
Finally, we were compelled to learn how encryption works and how it is implemented in existing protocols in order to create this project. This was especially true for the PDF file format, whose official implementation of encryption we had to learn how to use and integrate into our program. ## What we learned To create a working demo, we had to familiarize ourselves with the intricacies of mobile app development. This involved learning how to design a pleasing and convenient UI for users as well as having our important functions run behind the scenes to prepare our data. This app aided us in expanding our knowledge and capabilities in both front-end and back-end development. Learning to create this mobile app also forced us to learn Flutter, as well as the language Dart. No one in our group knew how to use these tools when we first started. As such, we had to learn from the ground up: first how to program in Dart, and then how to use Flutter with Dart to actually create our app. We had to learn about the different packages, classes, and overall project structure, which culminated in an understanding of the SDK and language. Finally, we learned the basics of file encryption. We explored many avenues when designing a means for our demo to decrypt, read and encrypt PDF files. We learned about AES and the means by which it encrypts files and stores keys. It helped us reformulate our understanding of what security is. ## What's next for Bank Statement Encryption (BSE) Protocol This protocol was designed to protect against identity theft by patching a hole in the existing bank security network and so, ideally, we would like to work with banks to implement our idea and service into their existing apps. This would increase the security of their users' information and provide them more privacy, all while continuing to allow them to take control of their data. However, regardless of whether or not we are able to partner with a real bank, the next major update for BSE is to incorporate a web app that utilizes this protocol, since not all banking is done on smartphones. Increasing the accessibility of BSE would be integral to its growth and its adoption by banks and users alike. Alternatively, if no bank wants to incorporate our idea, we could run the app as a service that anyone would be able to download on their phone to protect their bank statements or even other sensitive information. All of these provide a concrete example of the useful nature of this protocol.
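For illustration, here is one way the bank-side step of the BSE protocol could produce a password-protected statement, sketched in Python with the pypdf library. The library choice and function names are our assumptions; the team's demo pre-encrypted the files and stored the passwords on-device with flutter\_secure\_storage instead.

```python
# Illustrative server-side step: wrap a generated statement in a password-protected
# PDF and return (encrypted bytes, password). The password would later be pushed
# into the device's Keystore/Keychain and never shown to the user.
import io
import secrets
from pypdf import PdfReader, PdfWriter

def encrypt_statement(plain_pdf_bytes):
    password = secrets.token_urlsafe(24)          # random, never displayed
    reader = PdfReader(io.BytesIO(plain_pdf_bytes))
    writer = PdfWriter()
    for page in reader.pages:
        writer.add_page(page)
    writer.encrypt(user_password=password)        # standard PDF encryption
    out = io.BytesIO()
    writer.write(out)
    return out.getvalue(), password
```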
## Inspiration What inspired us was wanting to make an innovative solution that can have a big impact on people's lives. Most accessibility devices for the visually impaired are text-to-speech based, which is not ideal for people who may be both visually and auditorily impaired (such as the elderly). To put yourself in someone else's shoes is important, and we feel that if we can give the visually impaired a helping hand, it would be an honor. ## What it does The proof of concept we built is separated into two components. The first is an image processing solution which uses OpenCV and Tesseract to act as an OCR, taking an image as input and producing text as output. This text is then used as the input to the second part, a working 2-by-3 braille cell that converts any text into a braille output and then vibrates specific servo motors to represent the braille, with a half-second delay between letters. The outputs were then modified for servo motors which provide tactile feedback. ## How we built it We built this project using an Arduino Uno, six LEDs, six servo motors, and a Python file that does the image processing using OpenCV and Tesseract. ## Challenges we ran into Besides syntax errors, on the LED side of things there were challenges in converting the text to braille. Once that was overcome, and after some simple troubleshooting for menial errors, like type comparisons, this part of the project was completed. In terms of the image processing, getting the algorithm to properly process the text was the main challenge. ## Accomplishments that we're proud of We are proud of having completed a proof of concept, which we have isolated into two components. Consolidating these two parts is only a matter of more simple work, but these two working components are the fundamental core of the project, and we consider it to be the start of something revolutionary. ## What we learned We learned to iterate quickly and implement lateral thinking. Instead of being stuck in a small paradigm of thought, we learned to be more creative and find alternative solutions that we might have not initially considered. ## What's next for Helping Hand * Arrange everything in one Android app, so the product is capable of mobile use. * Develop a neural network so that it will throw out false text recognitions (these usually look like a few characters without any meaning). * Provide an API that will be able to connect our glove to other apps, where the user may, for example, read messages. * Consolidate the completed project components, which is to implement Bluetooth communication between a laptop processing the images, using OpenCV & Tesseract, and the Arduino Uno which actuates the servos. * Furthermore, we must design the actual glove product, implement wire management, add an armband holder for the Uno with a battery pack, and position the servos.
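A minimal sketch of the laptop-side bridge this kind of setup needs: map characters to six-dot braille patterns and send one encoded byte to the Arduino every half second over serial. The dot table is truncated to a few letters for brevity and the serial port name is a placeholder; the team's actual wiring and protocol may differ.

```python
# Illustrative text-to-braille bridge for a 2x3 (six-dot) cell driven by servos.
import time
import serial  # pyserial

# Standard six-dot braille patterns (dots 1-6), shown for a handful of letters.
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    " ": set(),
}

def to_frame(char):
    dots = BRAILLE.get(char.lower(), set())
    # Encode the six dots as one byte: bit 0 = dot 1, ..., bit 5 = dot 6.
    return bytes([sum(1 << (d - 1) for d in dots)])

def send_text(text, port="/dev/ttyACM0"):
    with serial.Serial(port, 9600, timeout=1) as link:
        time.sleep(2)               # let the Arduino reset after the port opens
        for ch in text:
            link.write(to_frame(ch))
            time.sleep(0.5)         # half-second delay between letters, as above
```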
## Inspiration The inspiration comes from Politik - this is a version 2.0 with significant improvements! We're passionate about politics and making sure that others can participate and more directly communicate with their politicians. ## What it does We show you the bills your House representative has introduced, and allow you to vote on them. We've created token based auth for sign-up and login, built a backend that allows us to reformat and modify 'bill' and 'representative' objects, and done a complete redesign of the frontend to be much simpler and more user friendly. We have polished the fax route (that can fax a message embedded in a formal letter directly to your representative!) and have begun implementation of Expo push notifications! ## How we built it We used React Native on the frontend, Node.js / Express.js on the backend, and MongoDB hosted by mLab. We created the backend almost completely from scratch and had no DB before this (just fake data pulled from a very rate limited API). We used Sketch to help us think through the redesign of the application. ## Challenges we ran into 1) Dealing with the nuances of MongoDB as we were handling an abnormally large amount of data and navigating through a bunch of promises before writing a final modified object into our Mongo collection. The mentors at the mLab booth were really helpful, giving us more creative ways to map relationships between our data. 2) Designing an intuitive, easily navigable user interface. None of us have formal design training but we realize that in politics more than anything else, the UX cannot be confusing - complicated legislation must be organized and displayed in a way that lends itself to greater understanding. ## Accomplishments that we're proud of We're proud of the amount of functionality we were able to successfully implement in such a short period of time. The sheer amount of code we wrote was quite a bit, and considering the complexity of designing a database schema and authentication system, we're proud that we were able to even finish! ## What we learned There is no such thing as 'too much' caffeine. ## What's next for Speak Up Ali plans to continue this and combine it with Politik. Now that authentication and messaging your representative works - I just need to refactor the code and fully test it before releasing on the app store.
## Inspiration As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process! ## What it does Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points! ## How we built it We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API. ## Challenges we ran into Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we are trying to do multiple things at the same time! Besides API integration, definitely working without any sleep though was the hardest part! ## Accomplishments that we're proud of Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :) ## What we learned I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weakness (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth). ## What's next for Savvy Saver Demos! After that, we'll just have to see :)
## Inspiration To introduce the most impartial and assured form of vote submission, in response to controversial democratic electoral polling following the 2018 US midterm elections. This event involved several encircling clauses of doubt and questioned authenticity of results among citizen voters. This propelled the idea of bringing enforced and much needed decentralized security to the polling process. ## What it does Allows voters to vote through a web portal on a blockchain. This web portal is written in HTML and JavaScript using the Bootstrap UI framework and jQuery to send Ajax HTTP requests to a Flask server written in Python, which communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses said passphrase to cast their ballot anonymously and securely. Following this, their vote, alongside the passphrase, goes to a Flask web server where it is properly parsed and sent to the ARK blockchain, which accounts for it as a transaction. This transaction is delegated by one ARK coin, represented as the count. Finally, a paper trail is generated following the submission of the vote on the web portal, in the event that public verification is needed. ## How we built it The initial approach was to use Node.js; however, Python with Flask was opted for as it proved to be a more optimally implementable solution. Visual Studio Code was used as a basis to present the HTML and CSS front end for visual representations of the voting interface. Alternatively, the ARK blockchain was constructed in a Docker container. These were used in a conjoined manner to deliver the web-based application. ## Challenges I ran into * Integration for a seamless formation of the app between the front- and back-end merge * Using Flask as an intermediary to act as a transitional fit for the back-end * Understanding the incorporation, use, and capability of blockchain for security in the purpose it was applied to ## Accomplishments that I'm proud of * Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern ## What I learned * Application of the ARK.io blockchain and its security protocols * The multitude of transcription stages for encryption, involving passphrases being converted to private and public keys * Utilizing jQuery to compile a comprehensive program ## What's next for Block Vote Expand Block Vote’s applicability to other areas requiring decentralized and trusted security, hence introducing a universal initiative.
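As a hedged sketch of the Flask intermediary described above: it accepts a ballot and one-time passphrase, validates them, and hands them off to the blockchain layer. `broadcast_vote` is a placeholder for whatever ARK client call the team actually used; it is not a real ARK SDK function, and the candidate names are made up.

```python
# Illustrative Flask intermediary for Block Vote (placeholder blockchain call).
from flask import Flask, jsonify, request

app = Flask(__name__)
VALID_CANDIDATES = {"candidate_a", "candidate_b"}   # placeholder ballot options

def broadcast_vote(passphrase, candidate):
    """Placeholder: sign a 1-ARK transaction tagged with the candidate and
    submit it to a relay node. Replace with the real ARK client code."""
    return {"accepted": True, "candidate": candidate}

@app.route("/vote", methods=["POST"])
def vote():
    data = request.get_json(force=True)
    passphrase, candidate = data.get("passphrase"), data.get("candidate")
    if not passphrase or candidate not in VALID_CANDIDATES:
        return jsonify({"error": "invalid ballot"}), 400
    receipt = broadcast_vote(passphrase, candidate)  # one vote -> one transaction
    return jsonify(receipt), 201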
## Inspiration Our journey with SignEase began with a vision: to foster inclusivity and connection by breaking down communication barriers. Inspired by the universal language of sign, we aimed to create a platform where all could communicate freely. Recognizing its transformative power, we embarked on a mission to build SignEase as a beacon of accessibility and inclusivity. ## What it does SignEase represents our commitment to bridging communication gaps using innovative technology. It's a dynamic web application designed for learning and mastering sign language. With a robust backend powered by Python Flask and sophisticated frontend architecture using React.js, SignEase offers an intuitive and immersive learning experience. Through the integration of Three.js, SignEase enables precise simulation of hand movements, allowing users to interact with dynamic representations of sign language gestures. This provides a comprehensive understanding of hand movements, enriching the learning process with interactive visualizations. Additionally, SignEase incorporates cutting-edge hand recognition technology, offering seamless interaction and real-time feedback as users practice and refine their signing skills. By storing user data and facilitating interactive learning experiences, SignEase fosters community connection and inclusivity through the universal language of sign. ## How we built it The development journey of SignEase was marked by a meticulous blend of innovation, collaboration, and technical expertise. Our team embarked on the journey by meticulously architecting the backend infrastructure using Python Flask, laying the foundation for efficient data management and seamless integration of advanced features. Leveraging the versatility of React.js, we crafted an elegant and responsive frontend interface, ensuring a seamless user experience across diverse devices and platforms. The integration of Three.js was crucial in enabling SignEase to simulate hand movements with precision, enriching the learning process with interactive visualizations of sign language gestures. This allowed users to interact with dynamic representations, enhancing their understanding and mastery of sign language. Moreover, SignEase incorporated cutting-edge hand recognition technology, which was achieved through rigorous testing and optimization to ensure seamless interaction and real-time feedback as users practiced and refined their signing skills. ## Challenges we ran into During the development of SignEase, we encountered several challenges that tested our problem-solving skills and perseverance. One significant hurdle we faced was optimizing the hand recognition algorithms to ensure efficient and accurate recognition of sign language gestures. Achieving seamless interaction and real-time feedback required extensive testing and refinement of the algorithms, which demanded significant time and effort from our team. Additionally, optimizing the performance of the 3D hand simulation posed another challenge. Ensuring smooth rendering of dynamic hand movements while maintaining responsiveness across different devices and browsers required careful optimization and fine-tuning of the Three.js integration. Integrating multiple components, including the backend infrastructure, frontend interface, hand recognition technology, and 3D rendering capabilities, also presented complexities. 
Coordinating the development efforts across different areas while maintaining consistency and compatibility required effective communication and collaboration within our team. Despite these challenges, our team remained dedicated and resilient, continuously iterating and refining our approach to overcome obstacles and deliver a high-quality product. ## Accomplishments that we're proud of We're immensely proud of creating SignEase, a dynamic web application dedicated to facilitating the learning and mastery of sign language. This accomplishment required meticulous planning, innovative design, and seamless integration of cutting-edge technologies, empowering users to communicate more inclusively. Additionally, the integration of Three.js for precise simulation of hand movements and cutting-edge hand recognition technology for real-time feedback represents significant milestones in our journey. These achievements deepen the learning experience, providing interactive visualizations and seamless interaction, reinforcing our commitment to innovation and user experience in fostering inclusivity and connection within communities. ## What we learned Developing SignEase has been a rich learning experience, deepening our understanding of user-centred design and the significance of creating accessible learning environments. By prioritizing the diverse needs of our users, we honed our ability to craft intuitive interfaces that cater to a wide range of learners. Moreover, integrating complex technologies like hand recognition algorithms and 3D rendering required meticulous planning and problem-solving skills. Overcoming these technical challenges expanded our expertise and reinforced the importance of adaptability and innovation in software development. Furthermore, SignEase highlighted the profound impact of technology in fostering inclusivity and connection within communities. Witnessing the transformative power of our platform reaffirmed our commitment to leveraging technology for positive social change, inspiring us to continue innovating for the greater good. ## What's next for SignEase Looking ahead, we envision several exciting developments for SignEase that will further enhance the learning experience and foster greater inclusivity. One of our primary objectives is to incorporate lifelike 3D models into SignEase, providing users with an even more immersive and realistic learning environment. By simulating hand movements with unparalleled realism, users will have a deeper understanding of sign language gestures, enriching their learning journey. Additionally, we aim to expand SignEase's feature set to include personalized learning paths, interactive quizzes, and gamified exercises. These enhancements will offer users a more tailored and engaging learning experience, catering to individual preferences and learning styles. Furthermore, we plan to explore opportunities for collaboration with experts in sign language education to continually improve and refine SignEase's effectiveness as a learning tool. By leveraging their insights and expertise, we can ensure that SignEase remains at the forefront of sign language education, empowering users to communicate more confidently and authentically. Overall, the future holds immense potential for SignEase as we continue to innovate and evolve, striving to make sign language learning more accessible, engaging, and impactful for users worldwide.
## Inspiration Google Magenta and the rich history of procedural generation, including artists like Steve Reich and Brian Eno. ## What it does It is essentially an instrument that plays an instrument: the musician controls the meta-instrument, which in turn plays different kinds of melodies on the preexisting synths built into the code. ## How we built it We used Javascript for all the programming, with some CSS thrown in for the website. A lot of statistics (multivariate gaussians etc) and matrix mathematics went into making this work. Three.js was used for the 3d visual controller and tone.js was used for the audio rendition. math.js was used for standard mathematical tools. Samples were obtained using an old open source digital audio workstation called Caustic. The site is hosted on Google AppEngine. ## Challenges I ran into Where do I start? Everything that could go wrong, did. For starters we missed the stop on the train, and then had to walk like 30 minutes in the freezing cold once we did eventually get to Princeton. Then, we found that tone.js's synths sounded kinda bad so we had to fiddle around and get samples. The basic logic of the transition was fundamentally flawed, which we only discovered 75% of the way through, and so on and so forth. ## Accomplishments that I'm proud of The whole thing! It combines sophisticated mathematics, good design and an interesting idea into a genuinely original and exciting project. ## What I learned three.js, tensor mathematics, music theory, some fractal stuff (for some reason), app engine, gcp usage, version control. ## What's next for Bulbul-lite * Provide finer control over transition probabilities * Add more parameters into the selection matrix * Explore how to use predictive capabilities to influence timbre and mimic genre music * Use ML to fit this model of generation with curated tracks and generate variations within the meta of a genre * We want to compete with Magenta; we believe our model is more expressive with sparse data and computing power * Integration with DAWs and game development tools are immediate opportunities for this project's development * Spice up the GUI
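The writeup above mentions melodies driven by transition probabilities. The project itself is written in JavaScript, but here is a minimal, language-agnostic sketch in Python of that kind of transition-matrix sampling; the note pool and probabilities are made up for illustration.

```python
import numpy as np

# Hypothetical pitch set and transition matrix; the real project's
# probabilities and note pool are not specified in the writeup.
NOTES = ["C4", "D4", "E4", "G4", "A4"]  # pentatonic pool (assumption)
TRANSITIONS = np.array([
    [0.1, 0.3, 0.3, 0.2, 0.1],
    [0.2, 0.1, 0.4, 0.2, 0.1],
    [0.2, 0.2, 0.1, 0.3, 0.2],
    [0.3, 0.1, 0.2, 0.1, 0.3],
    [0.4, 0.1, 0.2, 0.2, 0.1],
])  # each row sums to 1

def generate_melody(length: int, start: int = 0, seed: int = 42) -> list[str]:
    """Walk the Markov chain: each next note is sampled from the current note's row."""
    rng = np.random.default_rng(seed)
    idx = start
    melody = [NOTES[idx]]
    for _ in range(length - 1):
        idx = rng.choice(len(NOTES), p=TRANSITIONS[idx])
        melody.append(NOTES[idx])
    return melody

print(generate_melody(16))
```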
## Inspiration We have people in our family who have hearing impairments and use ASL to communicate; however, some of us realise that we do not know ASL properly and are in the process of learning. We wanted to make this app to raise awareness for people to learn ASL to bridge the gap between ableism and disableism. ## What it does This app uses a convolutional neural net built with Keras to recognise the image in front of it and convert ASL to text and then, if the user wants, convert that text to speech for more convenience. The app also helps convert speech to text for people who are visually impaired: it measures the background noise, adjusts for it, and then processes the audio to recognise the speech and convert it to text. We also have system verification in place for the user to avoid the hassle of dealing with downloading dependencies. ## How we built it We built the model using Keras: a convolutional neural net with softmax activation for multiclass classification, trained with the Adam optimizer. This model is then used in software made using the following libraries/frameworks: tkinter, pyttsx3, cv2 and speech recognition. These helped us show the camera feed, use those frames for recognising the ASL, and convert text to speech and speech to text. This was all made with Python to ensure the app works across different devices and platforms. ## Challenges we ran into The biggest challenge we ran into was ensuring that tkinter works cohesively among different layouts and making the OS paths work relative to the current path. There were other issues with threads and deadlocks to ensure that speech recognition and text to speech worked on different threads. ## Accomplishments that we're proud of Being able to make a working product that fits our requirements for the app. ## What we learned AI and models, how to implement the model prediction in real time with our app, and how to solve major git merging issues. ## What's next for Sign collaborative Integration into different API environments, and a more accurate machine learning model.
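A minimal sketch of the kind of Keras model the Sign collaborative writeup describes (softmax output for multiclass ASL classification, trained with Adam). The input size, number of classes, and layer sizes are assumptions; the team's actual architecture is not specified.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26   # assumption: one class per ASL letter
IMG_SIZE = 64      # assumption: 64x64 grayscale hand crops

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # multiclass output
])

# Adam optimizer with categorical cross-entropy, the usual pairing for a softmax head.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```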
partial
# Project Incognito: The Mirror of Your Digital Footprint 🕵️‍♂️ ### "With great power comes great responsibility." - A glimpse into the abyss of personal data accessibility. 🔍 --- ## Overview 📝 **Incognito** is not just a project; it's a wake-up call 🚨. Inspired by the unnerving concept of identity access in the anime "Death Note," we developed a system that reflects the chilling ease with which personal information can be extracted from a mere photograph. **Our Mission:** To simulate the startling reality of data vulnerability and empower individuals with the knowledge to protect their digital identity. 🛡️ ## How It Works 🧩 ![deepFaceValidation](https://i.postimg.cc/sD77TmZR/deep-Face-Time-Compare.jpg) 1. **The Photo**: The journey into your digital persona begins with a snapshot. Your face is the key. 📸 2. **The Intel Dev Cloud Processing**: This image is processed through sophisticated algorithms on the Intel Dev Cloud. ☁️ 3. **DeepFace Validation**: Our DeepFace model, handpicked for its superior performance, compares your image against our AWS-stored database. 🤖 4. **LinkedIn and Identity Confirmation**: Names associated with your facial features are cross-referenced with top LinkedIn profiles using DeepFace's `verify` functionality. 🔗 5. **Together AI's JSON Magic**: A JSON packed with Personally Identifiable Information (PII) is conjured, setting the stage for the next act. ✨ 6. **Melissa's Insight**: Presenting the Melissa API with the Personator endpoint that divulges deeper details - addresses, income, spouse and offspring names, all from the initial data seed. 👩‍💼 7. **Together AI Summarization**: The raw data, now a narrative of your digital footprint, is summarized for impact. 📊 8. **Data Privacy Rights**: In applicable jurisdictions, you have the option to demand data removal.🔗🛠️ 9. **JUSTICE!!**: This in the end is powered by Fetch AI and the chaining it provides. ⛓️ --- ![Judge Tags in Database](https://i.postimg.cc/B6BR5Hc5/Judge-List.png) ## The Darker Side 🌘 Our research ventured into the shadows, retrieving public data of individuals tagged as 'judges' on Slack. But fear not, for we tread lightly and ethically, using only data from consenting participants (our team :') ). 👥 ## The Even Darker Side 🌑 What we've uncovered is merely the tip of the iceberg. Time-bound, we've scratched the surface of available data. Imagine the potential depths. We present this to stir awareness, not fear. 🧊 ## The Beacon of Hope 🏮 Our core ethos is solution-centric. We've abstained from exploiting judge data or misusing PII. Instead, we're expanding to trace the origin of data points, fostering transparency and control. ✨ --- ## Closing Thoughts 💡 Incognito stands as a testament and a solution to the digital age's paradox of accessibility. It's a reminder and a resource, urging vigilance and offering tools for data sovereignty. 🌐 ---
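For reference, step 3 of the Incognito pipeline leans on DeepFace's `verify` call. Below is a minimal sketch of how the comparison against stored candidate images could look; the file paths are hypothetical placeholders, not the project's actual AWS layout.

```python
from deepface import DeepFace

# Hypothetical paths: a captured photo and candidate images pulled from a database.
captured = "captured_face.jpg"
candidates = ["db/person_a.jpg", "db/person_b.jpg"]

best = None
for candidate in candidates:
    # verify() returns a dict with a boolean "verified" flag and a "distance" score.
    result = DeepFace.verify(img1_path=captured, img2_path=candidate)
    if result["verified"] and (best is None or result["distance"] < best[1]):
        best = (candidate, result["distance"])

print("Best match:", best)
```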
## Inspiration 🤔 The inspiration behind Inclusee came from our desire to make digital design accessible to everyone, regardless of their visual abilities. We recognized that many design tools lack built-in accessibility features, making it challenging for individuals with low vision, dyslexia, and other visual impairments to create and enjoy visually appealing content. Our goal was to bridge this gap and ensure that everyone can see and create beautiful designs. ## What it does 📙 Inclusee is an accessibility addon for Adobe Express that helps designers ensure their creations are accessible to individuals with low vision, dyslexia, and other visual impairments. The addon analyzes the colors, fonts, and layouts used in a design, providing real-time feedback and suggestions to improve accessibility. Inclusee highlights areas that need adjustments and offers alternatives that comply with accessibility standards, ensuring that all users can appreciate and interact with the content. ## How we built it 🚧 We built Inclusee using the Adobe Express Add-On SDK, leveraging its powerful capabilities to integrate seamlessly with the design tool. Our team used a combination of JavaScript and React to develop the addon interface. We implemented color analysis algorithms to assess contrast ratios and detect color blindness issues. Additionally, we incorporated text analysis to identify and suggest dyslexia-friendly changes. Our development process included rigorous testing to ensure the addon works smoothly across different devices and platforms. ## Challenges we ran into 🤯 One of the largest challenges we faced was working with the Adobe Express Add-On SDK. As it is a relatively new tool, there are limited usage examples and documentation available. This made it difficult to find guidance and best practices for developing our addon. We had to rely heavily on experimentation and reverse-engineering to understand how to effectively utilize the SDK's features. Additionally, the SDK is still being fleshed out and new features are continuously being added. This meant that certain functionalities we wanted to implement were not yet available, forcing us to find creative workarounds or adjust our plans. The evolving nature of the SDK also posed challenges in terms of stability and compatibility, as updates could potentially introduce changes that affected our addon. Despite these hurdles, we persevered and were able to successfully integrate Inclusee with Adobe Express. Our experience working with the SDK has given us valuable insights and we are excited to see how it evolves and improves in the future. ## Accomplishments that we're proud of 🥹 We are proud of creating a tool that makes digital design more inclusive and accessible. Our addon not only helps designers create accessible content but also raises awareness about the importance of accessibility in design. We successfully integrated Inclusee with Adobe Express, providing a seamless user experience. Additionally, our color and font analysis algorithms are robust and accurate, offering valuable suggestions to improve design accessibility. ## What we learned 🧑‍🎓 Throughout the development of Inclusee, we learned a great deal about accessibility standards and best practices in design. We gained insights into the challenges faced by individuals with visual impairments and the importance of inclusive design. Our team also enhanced our skills in using the Adobe Express Add-On SDK and optimizing performance for real-time applications. 
## What's next for Inclusee 👀 * Expand Accessibility Features: Add more features, such as checking for proper text hierarchy and ensuring navigable layouts for screen readers. * Collaboration with Accessibility Experts: Work with experts to gather feedback and continuously improve the addon. * User Feedback Integration: Collect and implement user feedback to enhance functionality and usability. * Partnerships with Other Design Tools: Explore partnerships to extend Inclusee's reach and impact, promoting inclusive design across various platforms. * Educational Resources: Develop tutorials and resources to educate designers about accessibility best practices.
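Inclusee's add-on itself runs in Adobe Express (JavaScript), but the contrast-ratio analysis it performs follows the standard WCAG 2.x math, sketched here in Python for illustration.

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between foreground and background colors (1:1 to 21:1)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio((119, 119, 119), (255, 255, 255))
print(f"{ratio:.2f}:1", "passes AA body text" if ratio >= 4.5 else "fails AA body text")
```

A ratio of at least 4.5:1 is the usual AA bar for body text, which is the kind of threshold an accessibility suggestion would be checked against.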
## Inspiration The inspiration for digifoot.ai stemmed from the growing concern about digital footprints and online presence in today's world. With the increasing use of social media, individuals often overlook the implications of their online activities. We aimed to create a tool that not only helps users understand their digital presence but also provides insights into how they can improve it. ## What it does digifoot.ai is a comprehensive platform that analyzes users' social media accounts, specifically Instagram and Facebook. It aggregates data such as posts, followers, and bios, and utilizes AI to provide insights on their digital footprint. The platform evaluates images and content from users’ social media profiles to ensure there’s nothing harmful or inappropriate, helping users maintain a positive online presence. ## How we built it We built digifoot.ai using Next.js and React for the frontend and Node.js for the backend. The application integrates with the Instagram and Facebook Graph APIs to fetch user data securely. We utilized OpenAI's API for generating insights based on the collected social media data. ## Challenges we ran into API Authentication and Rate Limits: Managing API authentication for both Instagram and OpenAI was complex. We had to ensure secure access to user data while adhering to rate limits imposed by these APIs. This required us to optimize our data-fetching strategies to avoid hitting these limits. Integrating Image and Text Analysis: We aimed to analyze both images and captions from Instagram posts using the OpenAI API's capabilities. However, integrating image analysis required us to understand how to format requests correctly, especially since the OpenAI API processes images differently than text. The challenge was in effectively combining image inputs with textual data in a way that allowed the AI to provide meaningful insights based on both types of content. ## Accomplishments that we're proud of We are proud of successfully creating a user-friendly interface that allows users to connect their social media accounts seamlessly. The integration of AI-driven analysis provides valuable feedback on their digital presence. Moreover, we developed a robust backend that handles data securely while complying with privacy regulations. ## What we learned Throughout this project, we learned valuable lessons about API integrations and the importance of user privacy and data security. We gained insights into how AI can enhance user experience by providing personalized feedback based on real-time data analysis. Additionally, we improved our skills in full-stack development and project management. ## What's next for digifoot.ai Looking ahead, we plan to enhance digifoot.ai by incorporating more social media platforms for broader analysis capabilities. We aim to refine our AI algorithms to provide even more personalized insights. Additionally, we are exploring partnerships with educational institutions to promote digital literacy and responsible online behavior among students.
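The challenge paragraph above centers on combining an image and its caption in one request. Here is a minimal sketch of that request format using the OpenAI Python SDK; the model name and prompt wording are assumptions, not digifoot.ai's actual values.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyze_post(image_path: str, caption: str) -> str:
    """Send one post's image and caption together and ask for a footprint assessment."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Caption: {caption}\nFlag anything harmful or inappropriate "
                         f"and explain how it affects the user's digital footprint."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(analyze_post("post.jpg", "Friday night out!"))
```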
losing
## Inspiration CookHack was inspired by the fact that students in university are always struggling with the responsibility of cooking their next healthy and nutritious meal. However, most of the time, we as students are always too busy to decide and learn how to cook basic meals, and we resort to the easy route and start ordering Uber Eats or Skip the Dishes. Now, the goal with CookHack was to eliminate the mental resistance and make the process of cooking healthy and delicious meals at home as streamlined as possible while sharing the process online. ## What it does CookHack, in a nutshell, is a full-stack web application that provides users with the ability to log in to a personalized account to browse a catalog of 50 different recipes from our database and receive simple step-by-step instructions on how to cook delicious homemade dishes. CookHack also provides the ability for users to add the ingredients that they have readily available and start cooking recipes with those associated ingredients. Lastly, CookHack encourages the idea of interconnection by sharing their cooking experiences online by allowing users to post updates and blog forums about their cooking adventures. ## How we built it The web application was built using the following tech stack: React, MongoDB, Firebase, and Flask. The frontend was developed with React to make the site fast and performant for the web application and allow for dynamic data to be passed to and from the backend server built with Flask. Flask connects to MongoDB to store our recipe documents on the backend, and Flask essentially serves as the delivery system for the recipes between MongoDB and React. For our authentication, Firebase was used to implement user authentication using Firebase Auth, and Firestore was used for storing and updating documents about the blog/forum posts on the site. Lastly, the Hammer of the Gods API was connected to the frontend, allowing us to use machine learning image detection. ## Challenges we ran into * Lack of knowledge with Flask and how it works together with react. * Implementing the user ingredients and sending back available recipes * Had issues with the backend * Developing the review page * Implementing HoTG API ## Accomplishments that we're proud of * The frontend UI and UX design for the site * How to use Flask and React together * The successful transfer of data flow between frontend, backend, and the database * How to create a "forum" page in react * The implementation of Hammer of the Gods API * The overall functionality of the project ## What we learned * How to setup Flask backend server * How to use Figma and do UI and UX design * How to implement Hammer of the Gods API * How to make a RESTFUL API * How to create a forum page * How to create a login system * How to implement Firebase Auth * How to implement Firestore * How to use MongoDB ## What's next for CookHack * Fix any nit-picky things on each web page * Make sure all the functionality works reliably * Write error checking code to prevent the site from crashing due to unloaded data * Add animations to the frontend UI * Allow users to have more interconnections by allowing others to share their own recipes to the database * Make sure all the images have the same size proportions
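A minimal sketch of the Flask-plus-MongoDB recipe delivery described above, filtering recipes by the ingredients a user says they have. The connection string, database name, and document fields are assumptions.

```python
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
# Assumption: local MongoDB with a "cookhack" database and a "recipes" collection
# whose documents look like {"name": ..., "ingredients": [...], "steps": [...]}.
recipes = MongoClient("mongodb://localhost:27017")["cookhack"]["recipes"]

@app.route("/recipes", methods=["POST"])
def match_recipes():
    """Return recipes whose ingredient lists are covered by what the user has."""
    have = {i.strip().lower() for i in request.get_json().get("ingredients", [])}
    matches = []
    for recipe in recipes.find({}, {"_id": 0}):
        needed = {i.lower() for i in recipe.get("ingredients", [])}
        if needed and needed <= have:  # every required ingredient is available
            matches.append(recipe)
    return jsonify(matches)

if __name__ == "__main__":
    app.run(debug=True)
```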
# Omakase *"I'll leave it up to you"* ## Inspiration On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how? ## What It Does We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use their ingredients. ## What We Learned Most of the members in our group were inexperienced in mobile app development and backend. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more. ## How We Built It We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel Architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again, to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID and a set of recipes for that ingredient was obtained. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP. ## What We Are Proud Of We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty looking recipes including a Caribbean black bean and fruit salad that uses oranges and salsa. ## Challenges We Faced Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult. ## What's Next We have big plans for our app in the future. Some next steps we would like to implement are allowing users to include their dietary restrictions and food preferences so we can better match the recommendation to the user. We also want to make this app available on smart fridges; currently, fridges like Samsung’s have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest expiring foods.
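A sketch of the two server-side steps described above: localizing food objects in one shelf crop with the Google Cloud Vision API, then intersecting per-ingredient recipe sets. The recipe index here is a made-up stand-in for the Food.com lookup, and the majority-vote logic is simplified to an intersection with a union fallback.

```python
from google.cloud import vision

def detect_ingredients(image_bytes: bytes) -> set[str]:
    """Run object localization on one shelf crop and keep the detected object names."""
    client = vision.ImageAnnotatorClient()
    response = client.object_localization(image=vision.Image(content=image_bytes))
    return {obj.name.lower() for obj in response.localized_object_annotations
            if obj.score > 0.5}

def recipes_for(ingredients: set[str], recipe_index: dict[str, set[int]]) -> set[int]:
    """Intersect per-ingredient recipe ID sets, falling back to a union if the intersection is empty."""
    sets = [recipe_index[i] for i in ingredients if i in recipe_index]
    if not sets:
        return set()
    common = set.intersection(*sets)
    return common or set.union(*sets)  # prefer recipes that use every detected ingredient

# Hypothetical per-ingredient recipe index keyed by Food.com recipe IDs.
index = {"orange": {101, 102, 205}, "salsa": {102, 205, 330}}
print(recipes_for({"orange", "salsa"}, index))
```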
## Inspiration While there are several applications that use OCR to read receipts, few take the leap towards informing consumers on their purchase decisions. We decided to capitalize on this gap: we currently provide information to customers about the healthiness of the food they purchase at grocery stores by analyzing receipts. In order to encourage healthy eating, we are also donating a portion of the total value of healthy food to a food-related non-profit charity in the United States or abroad. ## What it does Our application uses Optical Character Recognition (OCR) to capture items and their respective prices on scanned receipts. We then parse through these words and numbers using an advanced Natural Language Processing (NLP) algorithm to match grocery items with its nutritional values from a database. By analyzing the amount of calories, fats, saturates, sugars, and sodium in each of these grocery items, we determine if the food is relatively healthy or unhealthy. Then, we calculate the amount of money spent on healthy and unhealthy foods, and donate a portion of the total healthy values to a food-related charity. In the future, we plan to run analytics on receipts from other industries, including retail, clothing, wellness, and education to provide additional information on personal spending habits. ## How We Built It We use AWS Textract and Instabase API for OCR to analyze the words and prices in receipts. After parsing out the purchases and prices in Python, we used Levenshtein distance optimization for text classification to associate grocery purchases with nutritional information from an online database. Our algorithm utilizes Pandas to sort nutritional facts of food and determine if grocery items are healthy or unhealthy by calculating a “healthiness” factor based on calories, fats, saturates, sugars, and sodium. Ultimately, we output the amount of money spent in a given month on healthy and unhealthy food. ## Challenges We Ran Into Our product relies heavily on utilizing the capabilities of OCR APIs such as Instabase and AWS Textract to parse the receipts that we use as our dataset. While both of these APIs have been developed on finely-tuned algorithms, the accuracy of parsing from OCR was lower than desired due to abbreviations for items on receipts, brand names, and low resolution images. As a result, we were forced to dedicate a significant amount of time to augment abbreviations of words, and then match them to a large nutritional dataset. ## Accomplishments That We're Proud Of Project Horus has the capability to utilize powerful APIs from both Instabase or AWS to solve the complex OCR problem of receipt parsing. By diversifying our software, we were able to glean useful information and higher accuracy from both services to further strengthen the project itself, which leaves us with a unique dual capability. We are exceptionally satisfied with our solution’s food health classification. While our algorithm does not always identify the exact same food item on the receipt due to truncation and OCR inaccuracy, it still matches items to substitutes with similar nutritional information. ## What We Learned Through this project, the team gained experience with developing on APIS from Amazon Web Services. We found Amazon Textract extremely powerful and integral to our work of reading receipts. We were also exposed to the power of natural language processing, and its applications in bringing ML solutions to everyday life. 
Finally, we learned about combining multiple algorithms in a sequential order to solve complex problems. This placed an emphasis on modularity, communication, and documentation. ## The Future Of Project Horus We plan on using our application and algorithm to provide analytics on receipts from outside of the grocery industry, including the clothing, technology, wellness, and education industries, to improve spending decisions among average consumers. Additionally, this technology can be applied to manage the finances of startups and analyze the spending of small businesses in their early stages. Finally, we can improve the individual components of our model to increase accuracy, particularly text classification.
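A minimal sketch of the Levenshtein-distance matching step described above, mapping abbreviated receipt lines onto a nutrition table; the table contents are made up for illustration.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical nutrition table keyed by canonical item names.
NUTRITION = {"whole milk": {"calories": 150}, "cheddar cheese": {"calories": 113},
             "white bread": {"calories": 79}}

def match_item(receipt_text: str) -> str:
    """Map an abbreviated receipt line (e.g. 'WHL MILK 2L') to its closest known item."""
    text = receipt_text.lower()
    return min(NUTRITION, key=lambda name: levenshtein(text, name))

print(match_item("WHL MILK 2L"))
```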
partial
## Inspiration Ever wish you didn’t need to purchase a stylus to handwrite your digital notes? Everyone has, at some point, not had a free hand to reach their keyboard. Whether you are a student learning to type or a parent juggling many tasks, sometimes a keyboard and stylus are not accessible. We believe that in the future you won’t even need to touch anything in order to take notes. HoverTouch utilizes touchless drawings and converts your (finger)written notes to typed text! We also have a text-to-speech function that is Google adjacent. ## What it does Using your index finger as a touchless stylus, you can write new words and undo previous strokes, similar to features on popular note-taking apps like Goodnotes and OneNote. As a result, users can eat a slice of pizza or hold another device in hand while achieving their goal. HoverTouch tackles efficiency, convenience, and retention all in one. ## How we built it Our pre-trained model from MediaPipe works in tandem with an Arduino Nano, flex sensors, and resistors to track your index finger’s drawings. Once complete, you can tap your pinky to your thumb and HoverTouch captures a screenshot of your notes as a JPG. Afterward, the JPG undergoes a masking process where it is converted to a black and white picture. The blue ink (from the user’s pen strokes) becomes black and all other components of the screenshot such as the background become white. Using the Google Cloud Vision API, a custom ML model, and Vertex AI Vision, HoverTouch then reads the handwriting and converts it to text displayed on our web browser application. ## Challenges we ran into Given that this was our first hackathon, we had to make many decisions regarding feasibility of our ideas and researching ways to implement them. In addition, this entire event has been an ongoing learning process where we have felt so many emotions — confusion, frustration, and excitement. This truly tested our grit but we persevered by uplifting one another’s spirits, recognizing our strengths, and helping each other out wherever we could. One challenge we faced was importing the Google Cloud Vision API. For example, we learned that we were misusing the terminal and our disorganized downloads made it difficult to integrate the software with our backend components. Secondly, while developing the hand tracking system, we struggled with producing functional Python lists. We wanted to make line strokes when the index finger traced thin air, but we eventually transitioned to using dots instead to achieve the same outcome. ## Accomplishments that we're proud of Ultimately, we are proud to have a working prototype that combines high-level knowledge and a solution with significance to the real world. Imagine how many students, parents, and friends in settings like your home, classroom, and workplace could benefit from HoverTouch's hands-free writing technology. This was the first hackathon for ¾ of our team, so we are thrilled to have undergone a time-bounded competition and all the stages of software development (ideation, designing, prototyping, etc.) toward a final product. We worked with many cutting-edge software and hardware tools despite having zero experience before the hackathon. In terms of technicals, we were able to develop varying thickness of the pen strokes based on the pressure of the index finger. This means you could write in a calligraphy style and it would be translated from image to text in the same manner.
## What we learned This past weekend we learned that our **collaborative** efforts led to the best outcomes as our teamwork motivated us to persevere even in the face of adversity. Our continued **curiosity** led to novel ideas and encouraged new ways of thinking given our vastly different skill sets. ## What's next for HoverTouch In the short term, we would like to develop shape recognition. This is similar to the Goodnotes feature where a hand-drawn square or circle automatically corrects to perfection. In the long term, we want to integrate our software into web-conferencing applications like Zoom. We initially tried to do this using WebRTC, something we were unfamiliar with, but the Zoom SDK had many complexities that were beyond our scope of knowledge and exceeded the amount of time we could spend on this stage. ### [HoverTouch Website](hoverpoggers.tech)
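A sketch of the masking step HoverTouch describes, turning blue finger strokes into black ink on a white background before OCR; the HSV thresholds and file names are assumptions.

```python
import cv2
import numpy as np

def mask_strokes(screenshot_path: str, out_path: str = "masked.jpg") -> None:
    """Keep only the blue pen strokes: blue -> black ink, everything else -> white."""
    img = cv2.imread(screenshot_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Assumed HSV range for the on-screen blue ink; tune for the actual stroke color.
    lower_blue = np.array([100, 80, 80])
    upper_blue = np.array([130, 255, 255])
    stroke_mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # White canvas with black pixels wherever strokes were detected.
    result = np.full(img.shape[:2], 255, dtype=np.uint8)
    result[stroke_mask > 0] = 0
    cv2.imwrite(out_path, result)

mask_strokes("notes_screenshot.jpg")
```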
## Inspiration We know the struggles of students. Trying to get to that one class across campus in time. Deciding what to make for dinner. But there was one that stuck out to all of us: finding a study spot on campus. There have been countless times when we wander around Mills or Thode looking for a free space to study, wasting our precious study time before the exam. So, taking inspiration from parking lots, we designed a website that presents a live map of the free study areas of Thode Library. ## What it does desk.lib is a network of small mountable microcontrollers that use ultrasonic sensors to check if a desk/study spot is occupied. In addition, it uses machine learning to determine peak hours and suggested availability from the aggregated data it collects from the sensors. A webpage presents a live map, as well as peak hours and suggested availability. ## How we built it We used a Raspberry Pi 3B+ to receive distance data from an ultrasonic sensor and used a Python script to push the data to our database running MongoDB. The data is then pushed to our webpage running Node.js and Express.js as the backend, where it is updated in real time to a map. Using the data stored on our database, a machine learning algorithm was trained to determine peak hours and determine the best time to go to the library. ## Challenges we ran into We had a **life-changing** experience learning back-end development, delving into new frameworks such as Node.js and Express.js. Although we were comfortable with front end design, linking the front end and the back end together to ensure the web app functioned as intended was challenging. For most of the team, this was the first time dabbling in ML. While we were able to find a Python library to assist us with training the model, connecting the model to our web app with Flask was a surprising challenge. In the end, we persevered through these challenges to arrive at our final hack. ## Accomplishments that we are proud of We think that our greatest accomplishment is the sheer amount of learning and knowledge we gained from doing this hack! Our hack seems simple in theory but putting it together was one of the toughest experiences at any hackathon we've attended. Pulling through and not giving up until the end was also noteworthy. Most importantly, we are all proud of our hack and cannot wait to show it off! ## What we learned Through rigorous debugging and non-stop testing, we gained more experience with JavaScript and its various frameworks such as Node.js and Express.js. We also got hands-on involvement with programming concepts and databases such as mongoDB, machine learning, HTML, and scripting, where we learned the applications of these tools. ## What's next for desk.lib If we had more time to work on this hack, we would have been able to increase cost effectiveness by branching four sensors off one chip. Also, we would implement more features to make an impact in other areas such as the ability to create social group beacons where others can join in for study, activities, or general socialization. We were also debating whether to integrate a solar panel so that the installation process can be easier.
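A sketch of the sensing script described above: an HC-SR04-style ultrasonic reading on the Raspberry Pi, pushed to MongoDB for the live map. The GPIO pins, occupancy threshold, and collection layout are assumptions.

```python
import time

import RPi.GPIO as GPIO
from pymongo import MongoClient

TRIG, ECHO = 23, 24      # assumed BCM pin numbers for the HC-SR04 sensor
OCCUPIED_CM = 60         # assumed distance below which the desk counts as occupied

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
spots = MongoClient("mongodb://localhost:27017")["desklib"]["spots"]

def read_distance_cm() -> float:
    """Fire a 10-microsecond trigger pulse and time the echo."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound, out and back

while True:
    distance = read_distance_cm()
    spots.update_one(
        {"spot_id": "thode-2f-014"},  # hypothetical spot identifier
        {"$set": {"occupied": distance < OCCUPIED_CM, "updated_at": time.time()}},
        upsert=True,
    )
    time.sleep(5)
```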
## Inspiration Our teammate, Olivia Yong, and her family members have suffered brain injuries. While discussing the detrimental impacts of delayed diagnosis and treatment for concussions and diseases of the brain, we began researching how we could help those with Alzheimer's disease. ## What it does RECOLLECT allows you to locate your loved ones that have Alzheimer's disease. ## How we built it * Twilio API for location tracking * Firebase for the user database * User interface made with ReactJS ## Challenges we ran into GPS accuracy in a small range was our biggest challenge! We walked around the space we were working in quite a bit! It was our first time working with GPS, but it was also one of the most fun parts of our learning process. ## Accomplishments that we're proud of Utilizing Google's Geolocation Maps API to track location in real time! Watching our coordinates change as we walked around while testing our product was fascinating. ## What we learned At first, we wanted to integrate hardware into our project as we initially ideated an eye-tracking product that would help diagnose concussions. Due to limitations with our PC's specifications, we had to pivot to a different type of problem that affects the brain, Alzheimer's disease. We learned that it's important to first identify our technical abilities before attempting to use hardware that's completely new to us! ## What's next for RECOLLECT RECOLLECT's simple user interface would benefit from even more features for users. We could conduct user research on what pain points are specifically relevant to those that have Alzheimer's disease, and the guardians that take care of them.
winning
## Inspiration We were trying for an IM cross MS paint experience, and we think it looks like that. ## What it does Users can create conversations with other users by putting a list of comma-separated usernames in the To field. ## How we built it We used Node JS combined with the Express.js web framework, Jade for templating, Sequelize as our ORM and PostgreSQL as our database. ## Challenges we ran into Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging. ## Accomplishments that we're proud of Getting a (mostly) fully up-and-running chat client up in 24 hours! ## What we learned We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production environment node app. ## What's next for SketchWave We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would like to include mobile platform support via a responsive web design as well, and possibly in the future even have a mobile app.
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
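The writeup doesn't name its transcription or parsing models, so the following is only a rough sketch of the kind of Flask endpoint implied: the browser posts a recorded clip, the server transcribes it (OpenAI's Whisper API is used here purely as a stand-in) and pulls out menu keywords. The menu and parsing logic are heavily simplified assumptions.

```python
import os
import tempfile

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # stand-in transcription provider; the team's actual AI service isn't named

MENU = {"burger", "fries", "milkshake", "salad"}  # hypothetical menu keywords

@app.route("/order", methods=["POST"])
def order():
    """Accept an uploaded audio clip, transcribe it, and pull out menu items."""
    upload = request.files["audio"]
    with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as tmp:
        upload.save(tmp.name)
    try:
        with open(tmp.name, "rb") as f:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=f).text
    finally:
        os.unlink(tmp.name)

    words = transcript.lower().replace(",", " ").split()
    items = [w for w in words if w in MENU or w.rstrip("s") in MENU]
    return jsonify({"transcript": transcript, "items": items})

if __name__ == "__main__":
    app.run(debug=True)
```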
## Inspiration We love spending time playing role based games as well as chatting with AI, so we figured a great app idea would be to combine the two. ## What it does Creates a fun and interactive AI powered story game where you control the story and the AI continues it for as long as you want to play. If you ever don't like where the story is going, simply double click the last point you want to travel back to and restart from there! (Just like in Groundhog Day) ## How we built it We used Reflex as the full-stack Python framework to develop an aesthetic frontend as well as a robust backend. We implemented 2 of TogetherAI's models to add the main functionality of our web application. ## Challenges we ran into From the beginning, we were unsure of the best tech stack to use since it was most members' first hackathon. After settling on using Reflex, there were various bugs that we were able to resolve by collaborating with the Reflex co-founder and employee on site. ## Accomplishments that we're proud of All our members are inexperienced in UI/UX and frontend design, especially when using an unfamiliar framework. However, we were able to figure it out by reading the documentation and peer programming. We were also proud of optimizing all our background processes by using Reflex's asynchronous background tasks, which sped up our website API calls and overall created a much better user experience. ## What we learned We learned an entirely new but very interesting tech stack, since we had never even heard of using Python as a frontend language. We also learned about the value and struggles that go into creating a user friendly web app we were happy with in such a short amount of time. ## What's next for Groundhog More features are in planning, such as allowing multiple users to connect across the internet and roleplay on a single story as different characters. We hope to continue optimizing the speeds of our background processes in order to make the user experience seamless.
winning
## Inspiration I love videogames. There are so many things that we can't do in the real world because we are limited to the laws of physics. There are so many scenarios that would be too horrible to put ourselves in if it were the real world. But in the virtual world of videogames, you can make the impossible happen quite easily. But beyond that, they're just fun! Who doesn't enjoy some stress-relief from working hard at school to go and game with your friends? Especially now with COVID restrictions, videogames are a way for people to be interconnected and to have fun with each other without worrying about catching a deadly disease. ## What it does The Streets of Edith Finch is a first-person shooter, battle royale style game built with the impressive graphics of Unreal Engine 4. Players are spawned into the unique level design where they can duke it out to be the last man/woman standing. ## How I built it I used Unreal Engine 4 to simulate the physics and effects and to develop the frameworks for actors. Textures are community assets from the Epic Games community. Functionality, modes, and game rules were built in C++ and Blueprints (Kismet) and developed directly in the engine's source code. ## Challenges I ran into Unreal Engine has A LOT of modules and classes so navigation was definitely not easy, especially since this was my first time working with it. Furthermore, Unreal Engine introduces a lot of Unreal-specific syntax that does not follow traditional C++ syntax, so that was also a learning curve. Furthermore, simulating the physics behind ragdolls and pushing over certain entities was also difficult to adjust. ## Accomplishments that I'm proud of The fact that this is actually playable! Was not expecting the game to work out as well as it did given my limited experience and the lack of manpower working solo. ## What I learned I learned that game development on its own is a whole other beast. The coding is merely a component of it. I had to consider textures and shadow rendering, animations, physics, and playability all on top of managing module cohesion and information hiding in the actual code. ## What's next for The Streets of Edith Finch Make level design much larger - not enough time this time around. This will allow for support for more players (level is small so only about 2-3 players before it gets too hectic). Furthermore, spawn points need to be fixed as some players will spawn at the same point. Crouching and sprinting animations need to be implemented as well as ADSing. Finally, player models are currently missing textures as I couldn't find any good ones in the community right now that weren't >$100 lol.
## Inspiration Reflecting on 2020, we were challenged with a lot of new experiences, such as online school. Hearing a lot of stories from our friends, as well as our own experiences, doing everything from home can be very distracting. Looking at a computer screen for such a long period of time can be difficult for many as well, and ultimately it's hard to maintain a consistent level of motivation. We wanted to create an application that helped to increase productivity through incentives. ## What it does Our project is a functional to-do list application that also serves as a 5v5 multiplayer game. Players create a todo list of their own, and each completed task grants "todo points" that they can allocate towards their attributes (physical attack, physical defense, special attack, special defense, speed). However, tasks that are not completed serve as a punishment by reducing todo points. Once everyone is ready, the team of 5 will be matched up against another team of 5 with a preview of everyone's stats. Clicking "Start Game" will run the stats through our algorithm that will determine a winner based on whichever team does more damage as a whole. While the game is extremely simple, it is effective in that players aren't distracted by the game itself because they would only need to spend a few minutes on the application. Furthermore, a team-based situation also provides incentive as you don't want to be the "slacker". ## How we built it We used the Django framework, as it is our second time using it and we wanted to gain some additional practice. Therefore, the languages we used were Python for the backend, HTML and CSS for the frontend, as well as some SCSS. ## Challenges we ran into As we all worked on different parts of the app, it was a challenge linking everything together. We also wanted to add many things to the game, such as additional in-game rewards, but unfortunately didn't have enough time to implement those. ## Accomplishments that we're proud of As it is only our second hackathon, we're proud that we could create something fully functioning that connects many different parts together. We spent a good amount of time on the UI as well, so we're pretty proud of that. Finally, creating a game is something that was all outside of our comfort zone, so while our game is extremely simple, we're glad to see that it works. ## What we learned We learned that game design is hard. It's hard to create an algorithm that is truly balanced (there's probably a way to figure out in our game which stat is by far the best to invest in), and we had doubts about how our application would do if we actually released it, if people would be inclined to play it or not. ## What's next for Battle To-Do Firstly, we would look to create the registration functionality, so that player data can be generated. After that, we would look at improving the overall styling of the application. Finally, we would revisit game design - looking at how to improve the algorithm to make it more balanced, adding in-game rewards for more incentive for players to play, and looking at ways to add complexity. For example, we would look at implementing a feature where tasks that are not completed within a certain time frame leads to a reduction of todo points.
## Inspiration We wanted to improve even more on the immersiveness of VR by using 3D maps based on the real world. We also wanted to demonstrate the power of AR as a medium for interactive and collaborative gaming. We also wanted to connect the physical world with the virtual world, so a player moves through the game by jogging in place. ## What it does Our game allows 4 players, divided into teams of 2, to run through the streets of Paris looking for a goal destination. On each team, one player is immersed at the street level through VR, while his or her comrade views the entire world map as an AR overlay on a surface. The navigator must help their teammate search through the streets of an unfamiliar city while trying to get to the destination before the other team. The faster the player on VR jogs in place, the faster they move through the VR world. ## How we built it * We worked on developing and rendering real-world meshes from the Google Street View API. This was done through openFrameworks for visual rendering and scene reconstruction. We also used Meshlab and Blender to generate these 3D scenes. We ran SLAM algorithms to create 3D scenes from 2D panoramas. * One member worked on exploring hardware options to connect the physical and virtual worlds. He used Apple's CoreMotion framework and applied signal processing techniques to turn accelerometer data from the iPhone in the Google Cardboard into accurate estimates of jogging speed. * One of the members developed and synchronized the AR/VR world with the 4 players. He used ARKit and SceneKit to create the VR world and tabletop AR world overlay. He used Firebase to synchronize the VR user's location with the player icon on the AR user's bird's eye view. ## Challenges we ran into A substantial amount of our time was spent trying to stitch together our own 3D renderings from Google Street View panoramas. Ultimately, we had to download and import existing object files from the internet. Another huge challenge was synchronizing all four players in real time in a shared AR/VR world. Further, while we were able to use Fast Fourier Transforms to get extremely accurate estimates of our jogging speed when processing the CoreMotion data in Matlab, implementing this in Swift proved much more difficult, so we built a simpler (but still fairly accurate) estimation script which does not transform the data to a frequency domain. ## Accomplishments that we're proud of Making a game that we would play ourselves and that we think others would too. We are able to create an immersive experience for four users and the players get some exercise too while they're playing! ## What we learned We should have eliminated some of the dead-ends that we found ourselves stuck in over the course of the hackathon by checking out more APIs beforehand. ## What's next for MazerunnerAR World dominance. Extending our game to work in any location around the world where there is Google Street View data. The game will have unlimited maps; players would just need to pick an area from Google Earth and they would be able to play the game with their friends in AR/VR.
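A sketch of the frequency-domain idea mentioned in the challenges: estimate jogging cadence from accelerometer magnitude by finding the dominant frequency in a plausible jogging band. The sample rate and band limits are assumptions, and the team's shipped Swift version avoided the FFT entirely.

```python
import numpy as np

def jogging_cadence_hz(accel_magnitude: np.ndarray, sample_rate: float = 50.0) -> float:
    """Return the dominant step frequency (Hz) within an assumed 1-4 Hz jogging band."""
    signal = accel_magnitude - accel_magnitude.mean()    # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    band = (freqs >= 1.0) & (freqs <= 4.0)               # plausible jogging cadences
    return float(freqs[band][np.argmax(spectrum[band])])

# Synthetic example: 2.5 steps per second plus noise, sampled at 50 Hz for 4 seconds.
t = np.arange(0, 4, 1 / 50)
accel = 1.0 + 0.5 * np.sin(2 * np.pi * 2.5 * t) + 0.05 * np.random.randn(t.size)
print(f"Estimated cadence: {jogging_cadence_hz(accel):.2f} Hz")
```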
partial
## Inspiration Communication is so important in today's world. Therefore, it is unfair that it may not be accessible to some parts of the population. We wanted to provide an easy solution in order to empower these individuals and build an inclusive environment. ## What it does and how we built it We used Symphonic Labs' voiceless API in order to interpret lip movements into text/transcripts for people with speech impairments, which can be visualized on an application like Google Meet through closed captions. Once transcribed, we used Google Translate's text-to-speech function to convert that text into speech, so that others can hear the intended words. ## Challenges we ran into We ran into a couple of challenges when developing the project. Firstly, there were bugs in the Symphonic API which slowed down our progress. However, we were able to overcome this challenge, with the help of our wonderful mentors, and create a working prototype. ## Accomplishments that we're proud of Despite multiple technical errors, we persevered through our project and successfully came up with an MVP. We collaborated effectively under time constraints and integrated feedback from mentors to constantly improve the code. ## What we learned We took so much away from this experience. Learning the tech was definitely one aspect of it, but in the process we developed other real-world skills such as critical thinking, problem-solving, building user-centric design, collaborating and so much more! ## What's next for VoiScribe In the future, we plan to make it capable of processing a live feed. We also plan to incorporate a sign language predictor that can detect sign language when lip-sync to speech fails. Lastly, we plan to make it a Chrome extension so that it is easily accessible to the public!
## 💡 Inspiration The ongoing emergency-services and healthcare crisis in Canada contributed to how our project came to be. We have seen reports of hospitals being short-staffed and exhausted and decided to attempt to mitigate some of the stress. Knowing emergency first aid can save many lives, yet only 18% of Canadians are currently certified. Seeing that our proposed idea could truly make an impact, we embarked on a 36-hour journey to create AnytimeAid: a project that provides help with just a few touches. ## 📱 What It Does We have built and designed a simple yet powerful application that fits all your medical emergency needs into your mobile phone. Simply taking a photo of an injury at hand will yield step-by-step instructions within a matter of seconds. The simple user interface allows for easy navigation of all the features. Additional features include a map, which points out nearby clinics and hospitals, as well as the current location of the user; and an emergency contact panel, which has 911 on quick dial and all other contacts the user has put down, including their name, phone number, and address. ## ⚙ How We Built It We designed the UI/UX in Figma, creating a clean and appealing interface with bright colours. The frontend of the app is built using React Native with Expo and the backend is built using Flask, Selenium with Python, and MongoDB as the database. We used the Expo MapView package to build the map page using functions such as reverse geocoding and getting the location of the user to create the functionality. Selenium was used with Expo Camera to process images for web scraping through reverse image search. The entire project is supported with Flask and MongoDB in the backend. ## 🗻 Challenges We Ran Into During the process of creating this project, there were several challenges we faced. One of these major challenges involved integrating our frontend with our backend and handling requests to ensure a functional product. Additionally, we were quite ambitious in what we wanted the application to do, leading us to underestimate the complexity of the project and the amount of time it would take, which led us to have to adapt our plan as we were developing the application. ## 🏆 Accomplishments That We're Proud Of As our project was quite complex and required many different components, we were proud of the progress we were able to make in the time that we had and our ability to integrate many different technologies, such as React Native, Flask, Selenium, and MongoDB, within our project. In the past, we mostly worked on implementing smaller projects with these technologies, so being able to apply past knowledge and new skills to a project that was meaningful to us was rewarding. ## 🧠 What We Learned We learned many new technologies such as Selenium, Figma, and React Native. Most importantly, we learned to not take on too much or overestimate our abilities. Although we had a great idea that we were excited to put into code, it didn't take long for us to realize we took on more than we could handle and as a result, there are many bugs still in the app. ## 😤 What's Next For AnytimeAid We would like to reach out to sponsors before beginning to run trial tests. These trial tests will start with online photos of injuries and measure the accuracy of the results. We will then move on to trial within select provinces in Canada before expanding further throughout Canada and potentially worldwide.
We will be constantly improving and updating our database and system and we hope to bring aid to those in need, anytime, anywhere, anyplace.
## Inspiration We created FriendLens to simplify sharing and reliving memories through automated photo sharing. The hassle of manually selecting photos from the gallery whenever someone asks you to send them is tiring and time-consuming, so we made FriendLens effortless and seamless. ## What it does FriendLens does the following: * It uses facial detection to identify people in photos and automatically share them with the right contacts; it also detects motion blur and overexposure, removing any pics that contain them * It allows you to select a time period of photos you want to send (for example, last night's party or last week's New York trip) Some other cool features * The app integrates with all the social media platforms, and it converts some goofy photos into stickers to use in social media * The app’s AI selects the best photo of the night and saves it as a core memory * Event albums, celebration reminders, photo enhancement, themed albums (parties, vacations, etc.), and many more. ## How we built it We built FriendLens as an Android app using Flutter and Dart for the UI and integrated a Python-based facial detection ML algorithm for the back end. We used Google Cloud for deployment. ## Challenges we ran into Integrating the Python algorithm into Flutter, handling compatibility issues with packages, and exploring alternative solutions for seamless integration. ## Accomplishments that we're proud of Successfully creating an MVP of FriendLens within a tight timeframe and learning about app development, AI integration, and teamwork. ## What we learned We gained experience in Flutter app development, Python integration, cloud deployment, and the importance of adapting to challenges. ## What's next for FriendLens Exploring more advanced facial recognition techniques, refining the user experience, and expanding features like personalized photo recommendations and smart album creation.
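A sketch of the blur and overexposure filter described above, using the common variance-of-Laplacian heuristic for sharpness; the thresholds are assumptions rather than FriendLens's actual values.

```python
import cv2

def is_usable(photo_path: str, blur_threshold: float = 100.0,
              overexposure_fraction: float = 0.30) -> bool:
    """Reject photos that are motion-blurred or largely overexposed."""
    gray = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)

    # Variance of the Laplacian: low values mean little edge detail, i.e. blur.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Fraction of near-white pixels as a rough overexposure check.
    blown_out = (gray >= 250).mean()

    return sharpness >= blur_threshold and blown_out <= overexposure_fraction

print(is_usable("party_photo.jpg"))
```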
losing
## Inspiration As a team, we wanted to help promote better workplace relationships and connections--especially since the workplace has been virtual for so long. To achieve this, we created a Slack bot to increase the engagement between coworkers through various activities or experiences that can be shared among each other. This will allow team members to form more meaningful relationships which create a better work environment overall. Slack is a great tool for an online work platform and can be used better to increase social connections while also being effective for communicating goals for a team. ## What it does When the Slackbot is added to a workspace on Slack, the team is capable of accessing various commands. The commands are used with the / prefix. Some of the commands include: * `help`: To get help * `game-help` : To get game help * `game-skribble` : Provides a link to a skribble.io game * `game-chess` : Provides a link to a chess game * `game-codenames` : Provides a link to a codenames game * `game-uno` : Provides a link to a UNO game * `game-monopoly` : Provides a link to a COVID-19 themed monopoly game * `memes` : Generates randomized memes (for conversation starters with colleagues) * `virtual-escape` : Allows users to see art, monuments, museums, etc. (randomized QR codes) through augmented reality for a short "virtual escape" * `mystery`: Displays a mystery of the month for colleagues to enjoy and talk about amongst each other We also have scheduled messages that we incorporated to help automate some processes in the workplace such as: * `birthday reminders` : Used to send an automated message to the channel when it is someone's birthday * `mystery` : Sends out the mystery of the month at the beginning of each month These features on our bot promote engagement between people with the various activities possible. ## How we built it The bot was designed using the Slack API and programmed using Python. Using the Slack API commands, the bot was programmed to increase social interaction and incite more conversations during the COVID-19 remote-working period. The bot has a variety of commands to help workers engage in conversations with their fellow colleagues. It offers a wide selection of virtual games to be played during lunch breaks, or workers can take a short relaxing break using the echoAR-generated QR codes to delve into the world of augmented reality and admire or play with what's shown. Workers are also able to request a single meme from the bot and enjoy a short laugh or spark a funny conversation with others who also see the meme. Overall, the bot was built with the mindset of further enhancing social communication in a fun manner to help those working remotely feel more at ease. ## Challenges we ran into This was the first time anyone from the team had worked with Slack and the Slack API. There were a lot of unfamiliar components and the project as a whole was a big learning experience for the team members. A few minor complications we ran into were mostly working out syntax and debugging code that threw no errors yet didn't display output properly. Overall, the team enjoyed working with Slack and creating our very first Slack bot. ## Accomplishments that we're proud of We are proud of all the work that was accomplished in one day! The bot was an interesting challenge for us that can be used throughout our daily lives as well (in our work or team clubs' Slack workspaces).
Some commands we are especially proud of are: * *virtual-escape*, since it was able to incorporate echoAR technology. This was something new to us that we were able to successfully integrate into our bot, and it has a huge impact on making our bot unique! * *memes*, as we were able to figure out a successful way to randomize the images * *scheduled messages*, since they allowed us to branch into making more dynamic bots Overall, we are very proud of all the work that was accomplished and the bot that we created after countless hours on Stack Overflow! :) ## What we learned We became more familiar with using Python in development projects and learned more about creating bots for different applications. We learned a lot of new things through trial and error as well as various YouTube videos. As this was our first Slack bot, the entire development process was new and challenging, but we were able to create a working bot in the end. We want to continue developing bots in the future as there is so much that can be accomplished with them--we barely scratched the surface! ## What's next for Slackbot The future of Slackbot looks bright. With new ideas emerging daily, the team looks forward to building upon the social attraction of the bot and incorporating more work-focused features, perhaps a file manager or icebreakers for newer employees. Our Slackbot is nowhere near its end, and with enough time it could become a truly wonderful AI for remote workers.
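As a rough illustration of how one of these slash commands could be wired up in Python, here is a sketch using Slack's Bolt framework; the team worked with the Slack API directly, so the framework choice, handler names, and game URL below are assumptions rather than their actual code:

```python
# Hypothetical slash-command handler for a workplace-engagement Slack bot.
import os
import random
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

MEMES = ["https://example.com/meme1.png", "https://example.com/meme2.png"]  # placeholders

@app.command("/game-chess")
def game_chess(ack, respond):
    ack()  # acknowledge the command within Slack's 3-second window
    respond("Challenge a colleague to chess: https://lichess.org/")

@app.command("/memes")
def memes(ack, respond):
    ack()
    respond(random.choice(MEMES))  # conversation starter

if __name__ == "__main__":
    app.start(port=3000)
```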
[Check out the GitHub repo.](https://github.com/victorzshi/better) ## Inspiration In our friend group, we often like to bet against each other on goals for motivation and fun. We realized that in many situations, exposing personal milestones to a group of friends can provide great social encouragement and strengthen bonds within a community. ## What it does Our app provides a Slack bot and web-app interface for co-workers or friends to share and get involved with each others' goals. ## How we built it We built the core of our app on the Standard Library platform, which allowed us to quickly develop our serverless Slack bot implementation. ## Challenges we ran into Getting to understand Standard Library and its unique features was definitely the steepest part of the learning curve this weekend. We had to make changes to play to certain weaknesses and strengths of the platform. ## Accomplishments that we're proud of We are proud that we were able to finish a functioning Slack bot, as well as present a pleasing website interface. ## What we learned We learned how to better take different types of user input and interaction into account when designing an application, as it was the first time most of our team had developed a bot. ## What's next for Better There are definitely many directions we could go with Better in the future. As a Slack bot, this prototype acts as an entry point with an HR-focused application. However, that is just the beginning. We could eventually spin off Better into a standalone app, or integrate robust and convenient payment solutions (such as options to donate to charity or other places). We could also build this idea into a sustainable business, with percentage cuts of the money pool in mind.
# Making the mundane interesting! During the COVID-19 pandemic, with the need to stay away from our fellow friends for the safety of our community, going outdoors has been one of the main ways to avoid cabin fever. However, as this quarantine stretches longer into the future, a few of us are getting bored of seeing the same parks and the same trees each time. What if there was a way to discover new parts of our city of London without aimlessly walking around - perhaps getting lost? ## Why a walking app? We love our pets, and our app shows that there may be a bias for our furry friends. We wanted to make an app that fellow furry friend lovers could use during this period. Sometimes, when going on walks, we see the same corner, the same tree, the same park, and so on. We'd imagine that our furry friends may get bored too, and may want to see a new fire hydrant, a new tree to pee on, or something new to just smell and do dog things with. ## How it works Our app takes in JSON data from [OpenData](https://opendata.london.ca), from the city of London, and aggregates it into a map showing these features as points of interest. You answer a quick personality test about the kind of walker you are, with questions such as * Do you have pets? * Do you like water? The app then recommends routes based on how you answered. Those with pets would be recommended routes that bias towards fire hydrants, trees, parks, and green spaces. Those who wanted water would get a larger bias towards water features like water bodies or streams. ## Our Frontend We used Figma to create the user experience. The general user experience has been sketched out below in a fun visual. ![img](https://i.ibb.co/yVssz66/frontend-idea.png) What we created was a mockup of a frontend that is meant to be used on a mobile device for a seamless experience in our connected world. ## Our Backend Google Cloud provides hosting services for a PostgreSQL server which holds all of the data from OpenData and can serve a scalable number of users without putting strain on OpenData or the servers of the City of London. The current implementation of the backend is a data visualization of some of the points within a given radius of a starting point. The user can create paths based on points of interest in that area. A machine learning aspect is to classify users based on how they would answer the personality questions. Attached are the [Profiles](https://docs.google.com/spreadsheets/d/1JJ9xPv4ixepKdF407_7O7y5pQClbC7J9XcJ-kNOg66k/edit#gid=0) we would expect a user to fall into. Based on how the user answers, a trained Logistic Regression classifier using cross-entropy loss for multiple classes would predict which class the user falls into, and then provide a recommender bias towards one of the preset personalities. ## Trials and Tribulations About 5 hours of effort went into trying to use DataStax as the database as a service. Much of that time was spent trying to upload JSON data into the GCP back-end using dsbulk. Needless to say, it was terribly difficult even for a professional who uses SQL in their professional life. We learned that there is a bigger difference between NoSQL and SQL than previously thought. Some of us came into this hackathon with no front-end programming experience, and rapidly learned about some new tools to develop attractive and easy front-end user interactions. We learned about Figma, a clickable front-end prototyping tool which requires no coding experience.
In exploring ways to plot paths, we discovered [OSMNX](https://github.com/gboeing/osmnx), a Python package to retrieve, model, analyze, and visualize OpenStreetMap street networks and other spatial data. We have been able to use OSMNX to display data such as the street network: ![img](https://i.ibb.co/Z8Ykdpz/city-of-london-streets.png) We have also been able to do preliminary point-to-point path-finding in the app. ![img](https://i.ibb.co/9Z8j6mB/path-example.png) ## Next Steps Our next steps would be to: * Integrate the Database as a Service from Google Cloud into our application * Add more than one way-point to our current path-finding system * Move deployment of the application from a proof-of-concept Jupyter Notebook to something more user-interactive, such as Streamlit, Flask, or another quick front-end experience * Implement a simple Logistic Regression to classify users based on how they answered the questions, to better recommend routes
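A small sketch of the OSMNX routing step mentioned above, assuming a walkable network pulled around a starting point; the coordinates, distances, and point of interest are illustrative only:

```python
# Illustrative walking-route sketch with OSMnx + NetworkX.
import networkx as nx
import osmnx as ox

start = (42.9849, -81.2453)   # (lat, lon) near downtown London, Ontario (assumed)
poi = (42.9900, -81.2500)     # e.g. a park or fire hydrant pulled from OpenData (assumed)

# Walkable street network within ~1.2 km of the starting point.
G = ox.graph_from_point(start, dist=1200, network_type="walk")

# Snap both points to the nearest graph nodes (X = longitude, Y = latitude).
orig = ox.distance.nearest_nodes(G, X=start[1], Y=start[0])
dest = ox.distance.nearest_nodes(G, X=poi[1], Y=poi[0])

# Shortest walking path by edge length, then plot it.
route = nx.shortest_path(G, orig, dest, weight="length")
fig, ax = ox.plot_graph_route(G, route, node_size=0)
```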
partial
## Inspiration VibeCare was inspired by our team's common desire for a sense of health, well-being, and general good feelings among young people today - Vibes! ## What it does VibeCare provides a unique twist on a health app by incentivizing users to log their habits through the implementation of game design mechanics. Logging your habits rewards you with CareCoins, which you can use to buy Vibes, small pets that keep you company and support the progress of your healthy lifestyle! ## How we built it We built VibeCare using Processing, a Java-based IDE that provides tools for developing graphics and visual structure. Our development process started off with brainstorming ideas until we decided we wanted to focus on a health hack that could improve people's well-being. From there, we decided on Processing as our development platform based on our team's strengths and experience, and planned out the structure of our project through storyboarding and lots of whiteboard drawings! ## Challenges we ran into One of the main challenges that we ran into from the start of the project was learning how to effectively use Processing to implement the ideas in our project, as most of our team was not experienced in using the software. Another big challenge was keeping the project focused on the original goal of health and well-being and preventing scope creep away from our main idea. ## Accomplishments that we're proud of We are all extremely proud that we were able to design an application that incorporates both a strong focus on improving health and well-being, as well as a strong aesthetic and value on audience appeal. ## What we learned Throughout the development of VibeCare, we learned a lot about aspects of product design, teamwork, and how to work through a large project through the delegation of various tasks. ## What's next for VibeCare We designed VibeCare with scalability and expansion in mind. In the future, VibeCare could expand to different platforms such as web and iOS to make the app available to as many people as possible. Other ways of interacting with the app, like voice input, and personalized features, such as colour-blind accommodation, could contribute to increased accessibility. VibeCare has great potential for content expansion as well, such as tracking a user's health statistics and implementing a tracker for environmental impact.
## Inspiration Moodivation was inspired by our shared interest in mental wellness and the different ways we connect with it. After thoughtful discussions and analysis, we realized this app could make a meaningful impact. ## What it does The app features a streak system, rewarding users for logging their moods, completing wellness activities, and sharing positive messages with friends. ## How we built it We started with a prototype design, then built the app using various technologies, including Python, Firebase, React, and TypeScript. ## Challenges we ran into We successfully trained and tested a sentiment analysis AI model in MATLAB but faced difficulties deploying it to the web app. ## Accomplishments that we're proud of We're proud of our teamwork, creativity, and what we achieved in such a short time. ## What we learned We learned new technologies and improved our communication and collaboration skills. ## What's next for Moodivation Next, we plan to integrate the AI model and enhance the app's features.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
losing
## 💡 Inspiration 💡 The history of art is rich, deep, and underappreciated. On the other hand, AI-generated art has exploded in recent years and the technology has never been more relevant. What better way to learn about art than to play a casual web game involving AI art against your friends? ## ✨ What it does ✨ Welcome to Forge-It!, a real-time multiplayer party game where you and a group of art forgers work to imitate famous artists’ styles to sneak forged art past the detectives! Armed with a state-of-the-art image generation model, knowledge about an artist and art style, as well as one piece you’ve managed to get your hands on, your aim as a forger is to craft AI-generated art that can fool detectives with access to the artist’s portfolio. On the other hand, as a detective, your goal is to root out the "forged" art and stop the forgers! Forge-It! is a web application that allows many people to play against each other on two teams, the forgers and the detectives. Every round, the forgers generate images by entering prompts that get sent to a Stable Diffusion model which turns descriptions of images into reality. The forgers also vote on which art pieces they think are the best imitations, and the top 3, along with one true art piece by the artist, are presented to the detectives, who try to select the real one. The forgers take advantage of descriptions of the art movement and the artist which are given to them, and the detectives take advantage of 3 art pieces by the artist along with the descriptions. ## ⚙️ How we built it ⚙️ We used WebSockets with Node.js to create our server, which is responsible for broadcasting and handling user events in the back end. We used a Flask server for parsing Wikipedia pages for artist and art style information and for accessing our dataset of tens of thousands of art pieces, and another Flask server for accessing the Stable Diffusion model. We hosted the model on Google Cloud and used an Nvidia A100 GPU to generate images. In the front end, we used React to build an aesthetically pleasing web app. ## 🔥 Challenges we ran into 🔥 There were a couple of challenges with hosting the Stable Diffusion model, as well as building real-time communication with WebSockets. We also had to do a lot of data wrangling when cleaning our art dataset and when scraping Wikipedia. ## 💪 Accomplishments that we're proud of 💪 We're proud of creating an application that is not only completely functional, but also extremely clean, aesthetically pleasing, and intuitive to use. We're also proud of how well we worked together as a team. ## 📚 What we learned 📚 We learned a lot about how to communicate between multiple users and the server with WebSockets, and also generally how best to approach full-stack development. ## 📈 What's next for Forge-It!📈 We want to make a full points system, where forgers get points for getting votes on art they generated and points for having their art mistakenly chosen by detectives. Also, we want to implement more ways for the detectives and forgers to interact, such as detectives being able to intercept parts of forger prompts.
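For flavor, here is a hedged sketch of what the Flask service in front of the Stable Diffusion model could look like, using the `diffusers` library; the model ID, endpoint shape, and inference settings are assumptions, not the team's actual server:

```python
# Hypothetical image-generation endpoint for the forgers' prompts.
import base64
import io

import torch
from diffusers import StableDiffusionPipeline
from flask import Flask, jsonify, request

app = Flask(__name__)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # e.g. the A100 instance on Google Cloud

@app.post("/generate")
def generate():
    prompt = request.json["prompt"]  # forger's description of the forgery
    image = pipe(prompt, num_inference_steps=30).images[0]
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return jsonify({"image": base64.b64encode(buf.getvalue()).decode()})
```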
## Inspiration Have you ever wondered if your outfit looks good on you? Have you ever wished you did not have to spend so much time trying on your whole closet, taking a photo of yourself, and sending it to your friends for some advice? Have you ever wished you had worn a jacket because it was much windier than you thought? Then MIR will be your new best friend - all problems solved! ## What it does Stand in front of your mirror. Then ask Alexa for fashion advice. A photo of your outfit will be taken, then analyzed to detect your clothing articles, including their types, colors, and logos (bonus points if you are wearing a YHack t-shirt!). MIR will simply let you know if your outfit looks great, or if there is something even better in your closet. Examples of things that MIR takes into account include the types and colors of the outfit, current weather, logos, etc. ## How I built it ### Frontend React Native app for the smart mirror display. AWS Lambda for controlling an Amazon Echo to process voice commands. ### Backend Google Cloud Vision for identifying features and colors in a photo. Microsoft Cognitive Services for detecting faces and estimating where clothing would be. SciPy for template matching. Forecast.io for weather information. Runs on Flask on Amazon EC2. ## Challenges I ran into * Determining a good way to isolate clothing in an image - vision networks get distracted by things easily. * React Native is amazing when it does work, but is just a pain when it doesn't. * Our original method of using Google's Reverse Image Search for matching logos did not work as consistently as we had hoped. ## Accomplishments that I'm proud of It works! ## What I learned It can be done! ## What's next for MIR MIR can be further developed and used in many different ways! ## Another video demo: <https://youtu.be/CwQPjmIiaMQ>
## Inspiration As the lines between AI-generated and real-world images blur, the integrity and trustworthiness of visual content have become critical concerns. Traditional metadata isn't as reliable as it once was, prompting us to seek out groundbreaking solutions to ensure authenticity. ## What it does "The Mask" introduces a revolutionary approach to differentiate between AI-generated images and real-world photos. By integrating a masking layer during the propagation step of stable diffusion, it embeds a unique hash. This hash is directly obtained from the Solana blockchain, acting as a verifiable seal of authenticity. Whenever someone encounters an image, they can instantly verify its origin: whether it's an AI creation or an authentic capture from the real world. ## How we built it Our team began with an in-depth study of the stable diffusion mechanism, pinpointing the most effective point to integrate the masking layer. We then collaborated with blockchain experts to harness Solana's robust infrastructure, ensuring seamless and secure hash integration. Through iterative testing and refining, we combined these components into a cohesive, reliable system. ## Challenges we ran into Melding the complex world of blockchain with the intricacies of stable diffusion was no small feat. We faced hurdles in ensuring the hash's non-intrusiveness, so it didn't distort the image. Achieving real-time hash retrieval and embedding while maintaining system efficiency was another significant challenge. ## Accomplishments that we're proud of Successfully integrating a seamless masking layer that does not compromise image quality. Achieving instantaneous hash retrieval from Solana, ensuring real-time verification. Pioneering a solution that addresses a pressing concern in the AI and digital era. Garnering interest from major digital platforms for potential integration. ## What we learned The journey taught us the importance of interdisciplinary collaboration. Bringing together experts in AI, image processing, and blockchain was crucial. We also discovered the potential of blockchain beyond cryptocurrency, especially in preserving digital integrity. ## What's next for The Mask We envision "The Mask" as the future gold standard for digital content verification. We're in talks with online platforms and content creators to integrate our solution. Furthermore, we're exploring the potential to expand beyond images, offering verification solutions for videos, audio, and other digital content forms.
losing
## Inspiration We want a more sustainable way for individuals to travel from point A to point B by sharing a ride. ## What it does It allows users to post their routes to a carpool request board for other users (drivers) to see and potentially accept. We decided that it should be aimed at students because of security reasons and to build a more tight-knit user base to begin with. ## How we built it We brainstormed to create our Figma prototype and then started building the front-end. Once we were done with that, we started on the backend connections. ## Challenges we ran into Connecting the backend and the frontend. ## What if we had more time? We wanted to add a chat system so that users can communicate within the app, and to allow users to filter the requests page based on their current location.
## Inspiration Every college student has used a ride-sharing app at some point in their lives. The issue with ride-sharing apps is often the price, but using them is unavoidable. What if we could help not just students, but everyone, save a little bit of money while also creating a more efficient way to compare prices between different ride-sharing apps? ## What it does RideCast is a downloadable Google Maps plug-in that pulls from the APIs of Uber, Lyft, and Curb, using their data to analyze trends and predict the future prices of car rides, taking into account distance, weather, and time. Another perk of RideCast is that it shows price comparisons among the large ride-sharing apps and will automatically show the cheapest price on top. ## Challenges we ran into One of the main challenges was that Lyft has a very secure API and we must go through a long process to be approved to view the data they have. Because of this, we recorded data by hand every 30 minutes to check price changes over 16 hours. In the future, once we are fully approved by Lyft, it will be a much easier process to predict future trends. Another challenge we faced was that it took a long time to decide on an idea. Our original idea of tracking heat and building more heat-resistant areas around Cambridge was not as viable as we once thought. ## Accomplishments that we're proud of As a group that most likely has less experience in coding compared to others, we spent most of the weekend attending workshops and watching YouTube videos in order to learn the skills we wanted to utilize. We are also very proud to have become great friends over the weekend; it isn't often that people can become so close during such a small amount of time, especially while being able to work together without major arguments. We are all so glad to have met each other and hope that even after HackHarvard we can still stay in touch. ## What we learned A better world is about taking action and trying to implement your ideas. Consistency, persistence, and trusting in your vision are key to bringing better ideas into real life. Furthermore, we learned how to proceed when we are facing bottlenecks, and how to convince and encourage each other to make it to the goal. Tech-wise, some of us had never previously used Figma and had never trained models. We picked up a lot of new information and skills that will be invaluable to our project and our future lives. ## What's next for RideCast To create another model that predicts the probability of getting a driver at a certain time and location. Imagine you are in the middle of Michigan, you have a 5 AM shift, and you need a ride at 4 AM. You reserved an Uber but still there was no driver; or you have an Uber, but the reserved price is 10-plus dollars more than if you called it directly at that moment. We are also going to make the Google Maps plug-in capable of requesting rides through the APIs without redirecting users into the apps. It is also foreseeable to add features like requesting a ride directly by least wait time, price, comfort, etc., across all the platforms.
## Inspiration As university students and soon-to-be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has been a long-existing method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling as a way of commuting, we hope to work towards more sustainable cities. ## What it does FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as account verification to make signing into your account secure yet effortless. ## How we built it We used IBM Watson Visual Recognition to recognize users from a live image, after which they can request money from riders in the carpool by taking a picture of them and calling our API, which leverages the Interac e-Transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib, which collects information from the Interac, IBM Watson, Firebase, and SmartCar APIs. ## Challenges we ran into IBM's facial recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges that came up as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough when considering the flow for authenticating the SmartCar, as it required a redirection of the URL. ## Accomplishments that we're proud of We successfully got all of our APIs to work together! (SmartCar API, Firebase, Watson, StdLib, Google Maps, and our own Standard Library layer). Another tough feat we accomplished was the entire webcam-to-image-to-API flow, which wasn't trivial to design or implement. ## What's next for FaceLyft While creating FaceLyft, we created a security API for requesting payment via visual recognition. We believe that this API can be used in many more scenarios than carpooling and hope we can expand this API into different use cases.
losing
## Inspiration Assistive Tech was our assigned track; we had done it before and knew we could innovate with cool ideas. ## What it does It adds a camera and sensors which instruct a pair of motors that will lightly pull the user in a direction to avoid a collision with an obstacle. ## How we built it We used a camera pod for the cane, on which we mounted the camera and sensor. At the end of the cane we attached a chassis with the motors and controller. ## Challenges we ran into We had never used a voice command system paired with a Raspberry Pi and an Arduino, so combining all of that was a real challenge for us. ## Accomplishments that we're proud of Physically completing the cane and also making it look pretty; many of our past projects have wires everywhere and some parts aren't properly mounted. ## What we learned We learned to use Dialogflow and how to prototype in a foreign country where we didn't know where to buy stuff lol. ## What's next for CaneAssist As usual, all our projects will most likely be fully completed at a later date, and hopefully it gets to be a real product that can help people out.
## EyeSnap: Diabetic Retinopathy Detection with DiaScan - HackHarvardFall2023 ### **Introduction** Welcome to the future of healthcare, where your smartphone becomes your guardian for early diabetes detection. Our innovative solution harnesses the power of artificial intelligence to identify diabetic retinopathy right from the convenience of your smartphone. We are on a mission to make early diabetes diagnosis accessible and effortless for everyone. ### **Diabetic Retinopathy** Diabetic retinopathy is a serious eye condition that affects individuals with diabetes. It can lead to vision problems and even blindness if left untreated. This single-page website aims to provide a clear and concise explanation of diabetic retinopathy. ### **Key Components** #### **Retinal AI** Retinal AI is the trained CNN model that can detect the presence and severity of diabetic retinopathy. It forms the heart of our solution, providing accurate and reliable results. ![](https://github.com/kartikey-onlineGOD/EyeSnap-HackHarvard/blob/main/Assets/image%203.png) #### **DiaScan** DiaScan is a revolutionary 3D-printed device that helps us scan the retina using a phone camera. The DiaScan fits around the phone like a case and allows us to take retinal scan photographs, enabling easy and non-invasive detection of diabetic retinopathy. ![](https://github.com/kartikey-onlineGOD/EyeSnap-HackHarvard/blob/main/Assets/image%202.png) ### **About Us** Author: Kartikey Pandey. As a driven Computer Science student located in Pennsylvania with a strong interest in AI and full-stack development, I've made notable contributions within the Intel AI4Youth program. As both an apprentice and an intern, I've been selected for my dedication, and I've played a crucial role in creating impactful solutions to tackle issues at a global scale. With this new solution, I have focused on increasing health equity and detecting diabetes in patients using retinal scans. I believe in the power of early detection and its potential to transform healthcare. Join me in this journey towards a world where everyone can have easy access to early diabetes diagnosis and treatment. ### **Story behind this Project** I got the inspiration for this project from CareYaya and was able to deeply connect with the issue, as my own grandfather passed away three years ago due to kidney failure caused by Type 2 diabetes. Listening to Neal K. Shah, I was deeply moved and wanted to build a solution to this problem. ### **Contact Us** Email: [[email protected]](mailto:[email protected]) You can connect with me on LinkedIn or email me to learn more about EyeSnap. Together, we can make a difference in the fight against diabetic retinopathy. Your smartphone is now your guardian in the battle for early detection and prevention.
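To make the Retinal AI component concrete, here is an illustrative Keras CNN for grading retinal scans; the project's actual architecture, input size, and class labels are not specified, so everything below is an assumption:

```python
# Hypothetical CNN for grading retinal scans into severity classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # assumed: no DR, mild, moderate, severe, proliferative

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),  # normalize phone photos
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets from DiaScan captures
```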
## Inspiration Peripheral nerve compression syndromes such as carpal tunnel syndrome affect approximately 1 out of every 6 adults. They are commonly caused by repetitive stress, and with the recent trend of working from home due to the pandemic, it has become a mounting issue that more individuals will need to address. There exist several different types of exercises to help prevent these syndromes; in fact, studies show that 71.2% of patients who did not perform these exercises had to later undergo surgery due to their condition. It should also be noted that doing these exercises wrong could cause permanent injury to the hand as well. ## What it does That is why we decided to create the “Helping Hand”, providing exercises for a user to perform and using a machine learning model to recognize each successful try. We implemented flex sensors and an IMU on a glove to track the movement and position of the user's hand. An interactive GUI was created in Python to prompt users to perform certain hand exercises. A real-time classifier is then run once the user begins the gesture to identify whether they were able to successfully recreate it. Through the application, we can track the progression of the user's hand mobility and appropriately recommend exercises to target the areas where they are lacking most. ## How we built it The flex sensors were mounted on the glove using custom-designed 3D-printed holders. We used an Arduino Uno to collect all the information from the 5 flex sensors and the IMU. The Arduino Uno interfaced with our computer via a USB cable. We created a machine learning model with TensorFlow and Python to classify hand gestures in real time. The user was able to interact with our program through a simple GUI made in Python. ## Challenges we ran into Hooking up 5 flex sensors and an IMU to one power supply initially caused some power issues, causing the IMU to malfunction or give inaccurate readings. We were able to rectify the problem and add pull-up resistors as necessary. There were also various issues with the data collection, such as gyroscopic drift in the IMU readings. Another challenge was the need to effectively collect large datasets for the model, which prompted us to create clever Python scripts to facilitate this process. ## Accomplishments that we're proud of Accomplishments we are proud of include designing and 3D printing custom holders for the flex sensors and integrating both the IMU and flex sensors to collect data simultaneously on the glove. It was also our first time collecting real datasets and using TensorFlow to train a machine learning classifier model. ## What we learned We learned how to collect real-time data from sensors and create various scripts to process the data. We also learned how to set up a machine learning model, including parsing the data, splitting data into training and testing sets, and validating the model. ## What's next for Helping Hand There are many improvements for Helping Hand. We would like to make Helping Hand wireless by using an Arduino Nano, which has Bluetooth capabilities as well as compatibility with TensorFlow Lite. This would mean that all the classification would happen right on the device! Also, by uploading the data from the glove to a central database, it can be easily shared with your doctor. We would also like to create an app so that the user can conveniently perform these exercises anywhere, anytime. Lastly, we would like to implement an accuracy score for each gesture rather than a binary pass/fail (i.e.
display a reading of how well you are able to bend your fingers/rotate your wrist when performing a particular gesture). This would allow us to more appropriately identify the weaknesses within the hand.
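A hedged sketch of what the real-time gesture check could look like, assuming the Arduino Uno streams comma-separated flex and IMU readings over USB serial; the port name, sampling window, model file, and gesture labels are illustrative rather than the team's actual code:

```python
# Hypothetical real-time gesture classification loop for the sensor glove.
import numpy as np
import serial
import tensorflow as tf

GESTURES = ["wrist_rotation", "finger_bend", "fist", "rest"]  # assumed label set
model = tf.keras.models.load_model("gesture_classifier.h5")   # assumed model file
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)      # assumed port/baud

def read_window(n_samples: int = 50) -> np.ndarray:
    """Read n_samples rows of '5 flex values + 6 IMU values' from the Uno."""
    rows = []
    while len(rows) < n_samples:
        line = arduino.readline().decode(errors="ignore").strip()
        values = line.split(",")
        if len(values) == 11:
            rows.append([float(v) for v in values])
    return np.array(rows)

window = read_window()
probs = model.predict(window[np.newaxis, ...])[0]
print("Detected:", GESTURES[int(np.argmax(probs))])
```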
partial
## Inspiration I hate doing frontend styling. It is tedious and very visual, which causes most coding assistants like GPT-4 to not be very helpful. I built upon the work of these [UC Berkeley researchers](https://arxiv.org/abs/2405.20519) to extend their results to frontend web development. ## What it does The model receives 3 inputs: a target image of what the final web page should look like, a randomly initialized program in the custom-made DSL, and the rendered result of the DSL program. The model then suggests edits to the program based on the difference between the target image and the rendered program. The model is constrained during token generation to only suggest syntactically valid edits. This process repeats until the model achieves the target image. The model is trained like a diffusion model over program space. Randomly sampled programs in the DSL have their abstract syntax trees randomly mutated; the model then learns to reverse this noise and recover the original program, conditioned on the target image, which is the rendered original program. Once trained, the model can be given a diagram (e.g. a Figma board) of the desired web page, and it will iteratively build the front-end until the diagram is implemented. It could likely work on less-precise specs like hand-drawn sketches of a web page. ## How we built it I wrote a context-free programming language and a transpiler with Lark to convert DSL programs to HTML+CSS. I began training the model in Google Colab. ## Challenges we ran into The DSL had to be structured in a very particular way so that it could be randomly sampled and mutated. This made it more difficult to add functionality to the DSL because it also had to work with the sampler and mutator. Also, the model trains very slowly. The main bottleneck is the data-loading process. Currently, the DSL is rendered by converting it into a PDF and then screenshotting the PDF. This can definitely be optimized. ## What's next for StableDOM Finish training the model. Expand the DSL. Scale compute to expand the model's proficiency to the rest of HTML+CSS and frontend frameworks like React.
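For a taste of what a Lark-based DSL-to-HTML transpiler can look like, here is a toy grammar and transformer; the real StableDOM grammar is richer, and the rule names and layout primitives below are assumptions:

```python
# Toy DSL -> HTML transpiler with Lark (illustrative only, not StableDOM's grammar).
from lark import Lark, Transformer

grammar = r"""
    ?start: element
    element: "row" "{" element* "}"   -> row
           | "col" "{" element* "}"   -> col
           | "button" ESCAPED_STRING  -> button
           | "text" ESCAPED_STRING    -> text
    %import common.ESCAPED_STRING
    %import common.WS
    %ignore WS
"""

class ToHTML(Transformer):
    # Each method receives already-transformed children and returns an HTML string.
    def row(self, children):
        return f'<div style="display:flex">{"".join(children)}</div>'
    def col(self, children):
        return f'<div style="display:flex;flex-direction:column">{"".join(children)}</div>'
    def button(self, items):
        return f"<button>{items[0][1:-1]}</button>"  # strip surrounding quotes
    def text(self, items):
        return f"<p>{items[0][1:-1]}</p>"

parser = Lark(grammar)
program = 'col { text "Hello" row { button "Ok" button "Cancel" } }'
print(ToHTML().transform(parser.parse(program)))
```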
## Inspiration We have been recently inspired by the advent of what some are calling "Large Action Models," which can use natural language to perform a given task, like playing your favorite song on Spotify. Rabbit R1 has been a recent proponent of this new paradigm. ## What it does It is an AI agent that can look at the user's web browser and perform tasks specified by the user through natural language, such as "create a new Tweet." ## How we built it We built it using Meta's Segment Anything Model to get the system to understand the web browser. We also leverage OpenCLIP, which is an open-source multimodal model that can bridge images and text. We use the different components of the web page to allow the model to decide the best actionable steps based on the user's query. ## Challenges we ran into We ran into numerous challenges. We first wanted to use decision transformers but did not have the data, compute power, or large dataset to accomplish it. We then began to figure out how to use current open-source models to piece together a working proof of concept. As an added issue, it is difficult to control input and output devices due to security and privacy concerns from web browsers. ## Accomplishments that we're proud of We built a model that can understand the different components of a website and, with relatively substantial accuracy, determine what action to take based on a user's prompt. We had no idea how we were going to do this when we first talked about it, but we are proud of the progress we made and how feasible the solution became. ## What we learned We learned many valuable skills, such as running open-source models locally, and gained a deeper understanding of how these models work cohesively to accomplish a task. ## What's next for Project Jarvis We will be continuing this project by using a more modern architecture with decision transformers so that we can chain actions together to give the model the power to perform more complex tasks. We can also integrate speech-to-text for a seamless user interface.
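A rough illustration of the matching step described above - scoring SAM-produced crops of the page against the user's instruction with OpenCLIP; the model and pretrained weights named here are common public choices, not necessarily the ones the team used:

```python
# Hypothetical "which UI element matches the instruction" ranking with OpenCLIP.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def best_element(crop_paths, instruction):
    """crop_paths: image crops of page components produced by Segment Anything."""
    images = torch.stack([preprocess(Image.open(p)) for p in crop_paths])
    text = tokenizer([instruction])
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(1)  # cosine similarity per crop
    return crop_paths[int(scores.argmax())]

print(best_element(["crop_0.png", "crop_1.png"], "create a new Tweet"))
```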
## Inspiration We were going to build a themed application to time portal you back to various points in the internet's history that we loved, but we found out prototyping with retro looking components is tough. Building each component takes a long time, and even longer to code. We started by automating parts of this process, kept going, and ended up focusing all our efforts on automating component construction from simple Figma prototypes. ## What it does Give the plugin a Figma frame that has a component roughly sketched out in it. Our code will parse the frame and output JSX that matches the input frame. We use semantic detection with Cohere Classify on the button labels, combined with deterministic algorithms on the width, height, etc., to determine whether a box is a button, input field, and so on. It's like magic! Try it! ## How we built it Under the hood, the plugin is a transpiler for high-level Figma designs. Similar to a C compiler compiling C code to binary, our plugin uses an abstract-syntax-tree-like approach to parse Figma designs into HTML code. Figma stores all its components (buttons, text, frames, input fields, etc.) in nodes. Nodes store properties about the component or type of element, such as height, width, absolute position, and fills, as well as its children nodes - other components that live within the parent component. Consequently, these nodes form a tree. Our algorithm starts at the root node (root of the tree) and traverses downwards, pushing the generated HTML up from the leaf nodes to the root. The base case is if the component was 'basic', one that can be represented with two or fewer HTML tags. These are our leaf nodes. Examples include buttons, body texts, headings, and input fields. To recognize whether a node was a basic component, we leveraged the power of an LLM. We parsed the information stored in the node given to us by Figma into English sentences, then used it to train/fine-tune our classification model provided by co:here. We decided to use ML to do this since it is more flexible with unique and new designs. For example, we were easily able to create 8 different designs of a destructive button, and it would be time-consuming relative to the length of this hackathon to come up with a deterministic algorithm. We also opted to parse the information into English sentences instead of just feeding the model raw Figma node information, since the LLM would have a hard time understanding data that didn't resemble a human language. At each node level in the tree, we grouped the children nodes based on a visual hierarchy. Humans do this all the time: if things are closer together, they're probably related, and we naturally group them. We achieved a similar effect by calculating the spacing between each component, then greedily grouping them based on spacing size. Components with spacings that were within a tolerance percentage of each other were grouped under one HTML `<div>`. We also determined the alignments (cross-axis, main-axis) of these grouped children to handle designs with different combinations of orientations. Finally, the function is recursed on their children, and their converted code is pushed back up to the parent to be composited, until the root contains the code for the design. Our recursive algorithm made our plugin flexible enough for the countless designs possible in Figma. ## Challenges we ran into We ran into three main challenges. One was calculating the spacing.
While it was easy to just apply an algorithm that merges two components at a time (similar to mergesort), it would produce too many nested divs and wouldn't really be useful for developers using the created component. So we came up with our greedy algorithm. However, due to our perhaps mistaken focus on efficiency, we decided to implement a more difficult O(n) algorithm to determine spacing, where n is the number of children. This sapped a lot of time away, which could have been used for other tasks and supporting more elements. The second main challenge was with ML. We were actually using Cohere Classify wrongly, not taking semantics into account and trying to feed it raw numerical data. We eventually settled on using ML for what it was good at - semantic analysis of the label - while using deterministic algorithms to take other factors into account. Huge thanks to the Cohere team for helping us during the hackathon! Especially Sylvie - you were super helpful! We also ran into issues with theming on our demo website. To show how extensible and flexible theming could be on our components, we offered three themes - Windows XP, Windows 7, and a modern web layout. We were originally only planning to write out the code for Windows XP, but extending the component system to take themes into account was a refactor that took quite a while and detracted from our plugin algorithm refinement. ## Accomplishments that we're proud of We honestly didn't think this would work as well as it does. We've never built a compiler before, and from learning off blog posts about parsing abstract syntax trees to implementing and debugging highly asynchronous tree algorithms, I'm proud of us for learning so much and building something that is genuinely useful for us on a daily basis. ## What we learned Leetcode tree problems actually are useful, huh. ## What's next for wayback More elements! We can only currently detect buttons, text form inputs, text elements, and pictures. We want to support forms too, and automatically insert the controlling components (e.g. useState) where necessary.
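To make the grouping step concrete, here is a schematic, one-axis Python version of the greedy spacing algorithm; the plugin itself runs against Figma's plugin API in TypeScript, and the tolerance value below is an assumption:

```python
# Schematic greedy grouping by vertical spacing: gaps within a tolerance of the
# smallest gap are merged into the same <div> group.
def group_by_spacing(children, tolerance=0.25):
    """children: list of dicts with 'y' and 'height', pre-sorted top to bottom."""
    if len(children) < 2:
        return [children]
    gaps = [
        children[i + 1]["y"] - (children[i]["y"] + children[i]["height"])
        for i in range(len(children) - 1)
    ]
    base = min(gaps)
    groups, current = [], [children[0]]
    for child, gap in zip(children[1:], gaps):
        if gap <= base * (1 + tolerance):   # "close enough" -> same group
            current.append(child)
        else:                               # big jump -> start a new <div>
            groups.append(current)
            current = [child]
    groups.append(current)
    return groups

boxes = [{"y": 0, "height": 40}, {"y": 48, "height": 40}, {"y": 160, "height": 40}]
print([len(g) for g in group_by_spacing(boxes)])  # -> [2, 1]
```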
losing
## Inspiration During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before. ## What It Does Our application, called Hack2, is an intelligent search engine that uses Machine Learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions. ## How We Built It We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and the Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML. Next was the Machine Learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API. We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions. After finishing the Android app and backend development, we decided to add a web app, made using React, to make the service more accessible. ## Challenges We Ran Into From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as GitHub READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models. We also ran into quite a few issues getting our backend up and running, as it was our first time using Microsoft Azure. Specifically, Azure Functions do not currently support Python fully, meaning that we did not have the developer tools we expected to be able to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this issue by refactoring our cloud compute functions with multithreaded capabilities.
## What We Learned We learned a lot about Microsoft Azure’s cloud services, mobile development, and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time. On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work). ## What’s Next for Hack2 The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore is LSTM Siamese Neural Networks, which “remember” previous classifications moving forward.
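A hedged sketch of the document-similarity core described above - averaging custom word2vec vectors into a document vector and comparing with cosine similarity - assuming the trained vectors are loaded with gensim; the file path and example strings are purely illustrative:

```python
# Illustrative Sentence2Vec-style document comparison (not Hack2's exact pipeline,
# which also folds in Microsoft's Text Analytics output).
import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load("hacker_corpus_word2vec.kv")  # assumed path to the custom vectors

def doc_vector(text: str) -> np.ndarray:
    tokens = [t for t in text.lower().split() if t in wv]
    if not tokens:
        return np.zeros(wv.vector_size)
    return np.mean([wv[t] for t in tokens], axis=0)

def similarity(idea: str, existing_project: str) -> float:
    a, b = doc_vector(idea), doc_vector(existing_project)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(round(similarity("an app that checks whether a hackathon idea already exists",
                       "intelligent search engine for existing products") * 100), "%")
```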
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their emotions. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go about their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps an essential part of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a BeReal-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but also hear what it sounds like.
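As a very rough, assumption-heavy sketch of the brainwave-to-prompt step: estimate band power from a raw EEG window and map it to a mood keyword for Stable Diffusion. The channel handling, thresholds, and keyword mapping below are illustrative, not the team's pipeline:

```python
# Hypothetical mood-keyword extraction from one EEG channel.
import numpy as np
from scipy.signal import welch

FS = 200  # OpenBCI Ganglion sample rate in Hz

def band_power(signal, low, high):
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs <= high)
    return float(np.trapz(psd[mask], freqs[mask]))

def mood_keyword(eeg_window: np.ndarray) -> str:
    alpha = band_power(eeg_window, 8, 12)    # often associated with relaxation
    beta = band_power(eeg_window, 13, 30)    # often associated with focus/stress
    return "calm pastel landscape" if alpha > beta else "vivid chaotic cityscape"

window = np.random.randn(FS * 10)  # stand-in for 10 s of one EEG channel
print(mood_keyword(window))        # keyword passed on to the image model
```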
## Inspiration Being students in a technical field, we all have to write and submit resumes and CVs on a daily basis. We wanted to incorporate multiple non-supervised machine learning algorithms to allow users to view their resumes from different lenses, all the while avoiding the bias introduced from the labeling of supervised machine learning. ## What it does The app accepts a resume in .pdf or image format as well as a prompt describing the target job. We wanted to judge the resume based on layout and content. Layout encapsulates font, color, etc., and the coordination of such features. Content encapsulates semantic clustering for relevance to the target job and preventing repeated mentions. ### Optimal Experience Selection Suppose you are applying for a job and you want to mention five experiences, but only have room for three. cv.ai will compare the experience section in your CV with the job posting's requirements and determine the three most relevant experiences you should keep. ### Text/Space Analysis Many professionals do not use the space on their resume effectively. Our text/space analysis feature determines the ratio of characters to resume space in each section of your resume and provides insights and suggestions about how you could improve your use of space. ### Word Analysis This feature analyzes each bullet point of a section and highlights areas where redundant words can be eliminated, freeing up more resume space and allowing for a cleaner representation of the user. ## How we built it We used a word-encoder TensorFlow model to provide insights about semantic similarity between two words, phrases or sentences. We created a REST API with Flask for querying the TF model. Our front end uses Angular to deliver a clean, friendly user interface. ## Challenges we ran into We are a team of two new hackers and two seasoned hackers. We ran into problems with deploying the TensorFlow model, as it was initially available only in a restricted Colab environment. To resolve this issue, we built a RESTful API that allowed us to process user data through the TensorFlow model. ## Accomplishments that we're proud of We spent a lot of time planning and defining our problem and working out the layers of abstraction that led to actual processes with a real, concrete TensorFlow model, which is arguably the hardest part of creating a useful AI application. ## What we learned * Deploy Flask as a RESTful API to GCP Kubernetes platform * Use most Google Cloud Vision services ## What's next for cv.ai We plan on adding a few more features and making cv.ai into a real web-based tool that working professionals can use to improve their resumes or CVs. Furthermore, we will extend our application to include LinkedIn analysis between a user's LinkedIn profile and a chosen job posting on LinkedIn.
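The write-up above mentions a word-encoder TensorFlow model behind a Flask REST API; a minimal sketch of that pattern follows, assuming the publicly available Universal Sentence Encoder from TF Hub (the team's exact model, route name, and payload shape are not specified):

```python
# Hypothetical similarity endpoint: compare a resume section to a job prompt.
import numpy as np
import tensorflow_hub as hub
from flask import Flask, jsonify, request

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
app = Flask(__name__)

@app.post("/similarity")
def similarity():
    data = request.get_json()
    vectors = embed([data["experience"], data["job_posting"]]).numpy()
    a, b = vectors
    score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # cosine similarity
    return jsonify({"similarity": score})
```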
winning
## Inspiration TL;DR: Cut Lines, Cut Time. With the overflowing amount of information and the limited time that we have, it is important to distribute our time efficiently and get the most out of it. With people endlessly scrolling short videos on the most popular apps such as TikTok, Instagram, and YouTube, we thought, why not provide a similar service for text that can be not only fun but also productive? As a group of college students occupied with not only school but also hobbies and goals, we envisioned an app that can summarize any kind of long text effectively, so that while we get the essence of the text, we can also spend more time on other important things. Without having to ask someone to provide a TL;DR for us, we wanted to generate it ourselves in a matter of a few seconds, which helps us get the big picture of the text. TL;DR is applicable anywhere, from social media such as Reddit and Messenger to Wikipedia and academic journals, and is able to pick out the essentials in just one click. Ever in a time crunch to read a 10-page research paper? Want to stay updated on the news but are too lazy to actually read the whole article? Got sent a wall of texts from a friend and just want to know the gist of it? TL;DR: this is the app for you! ## What it does TL;DR helps summarize passages and articles into shorter forms of writing, making it easier (and faster) to read on the go. ## How we built it We started by prototyping the project on Figma and discussing our vision for TL;DR. From there, we separated our unique roles within the team into NLP, frontend, and backend. We utilized a plethora of services provided by the sponsors for CalHacks, using Azure to host much of our API and CockroachDB Serverless to seamlessly integrate persistent data on the cloud. We also utilized Vercel’s Edge Network to allow our application to be quickly visited by people across the globe. ## Web/Extension The minimalistic user interface, reflecting our goal of simplification, provides a web interface and a handy extension accessible by a simple right click. Simply select the text, and it will instantly be shortened and stored for future use! ## Backend and connections The backend was built with Flask in Python and hosted on Microsoft Azure as an App Service. GitHub Actions were also used in this process to deploy our code from GitHub itself to Microsoft Azure. We use Cockroach Labs' database to store our user data (email, phone number, and password) and cached summaries of past TL;DRs. Twilio is also used for user authentication as well as exporting a TL;DR from your laptop to your phone. We utilized Co:here’s APIs extensively, making use of the text summarization and sentiment classifier endpoints. Leveraging Beautiful Soup’s capability to extract information from web pages, these pair together to generate the output needed by our app. In addition, we went above and beyond to better the NLP landscape by allowing our users to make modifications to Co:here’s generations, which we can send back to Co:here. Through this, we are empowering a community of users who help support the development of accessible ML and get their work done as well - win/win! ## Challenges we ran into Every successful project comes with its own challenges, and we sure had to overcome some bugs and obstacles along the way! First, we took our time settling on the perfect idea, as we all wanted to create something that really impacts the lives of fellow students and the general population.
Although our project is “quick”, we were slow to make sure that everything was thoroughly thought through. In addition, we spent some time debugging our database connection, where a combination of user error and inexperience stumped our progress. However, with a bit of digging around and pair programming, we managed to solve all these problems and learned so much along the way! ## Accomplishments that we're proud of The integration of different APIs into one platform was a major accomplishment since the numerous code bases that were brought into play and exchanged data had to be done carefully. It did take a while but felt amazing when it all worked out. ## What we learned From this experience, we learned a lot about using new technologies, especially the APIs and servers provided by the sponsors, which helped us be creative in how we implement them in each part of our backend and analysis. We have also learned the power of collaboration and creating a better product through team synergy and combining our creativity and knowledge together. ## What's next for TL;DR We have so much in store for TL;DR! Specifically, we were looking to support generating TL;DR for youtube videos (using the captions API or GCP’s speech-to-text service). In addition, we are always striving for the best user experience possible and will find new ways to make the app more enjoyable. This includes allowing users to make more editions and moving to more platforms!
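As an illustration of the scrape-then-summarize flow described above, here is a small Python sketch assuming the Co:here Python SDK's summarize endpoint and Beautiful Soup for text extraction; the API key, parameters, and function shape are placeholders rather than the app's actual Flask code.

```python
# Illustrative sketch of the scrape-then-summarize flow, assuming the Co:here
# Python SDK's summarize endpoint; parameters are simplified placeholders.
import cohere
import requests
from bs4 import BeautifulSoup

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def tldr(url: str) -> str:
    # Pull the page and keep only paragraph text, which is what gets summarized.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    article = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

    # Ask Co:here for a short paragraph-style summary of the extracted text.
    response = co.summarize(text=article, length="short", format="paragraph")
    return response.summary

print(tldr("https://en.wikipedia.org/wiki/Hackathon"))
```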
## Inspiration

Sexual assault survivors are in tremendously difficult situations after being assaulted, having to sacrifice privacy and anonymity to receive basic medical, legal, and emotional support. Understanding how to proceed with one's life after being assaulted is also challenging because information on resources for these victims is scattered across different communities, whether the victim is on an American college campus, in a foreign country, or in any number of other situations. Instead of building a single solution or organizing one set of resources to help sexual assault victims everywhere, we believe a simple, community-driven solution to this problem lies in Echo.

## What it does

Using Blockstack, Echo facilitates anonymized communication among sexual assault victims, legal and medical help, and local authorities to foster a supportive online community for victims. Members of this community can share their stories, advice, and support for each other knowing that they truly own their data and remain anonymous to other users, thanks to Blockstack. Victims may also anonymously report incidents of assault on the platform as they happen, and these reports are shared with local authorities if a particular individual has been reported as an offender on the platform several times by multiple users. This incident data is also used to geographically map where sexual assault happens in small communities, to provide users of the app information on safe walking routes.

## How we built it

A crucial part of feeling safe as a sexual harassment survivor stems from the ability to stay anonymous in interactions with others. Our backend is built with this key foundation in mind. We used Blockstack's Radiks server to create a decentralized application that keeps all of a user's data local to that user. By encrypting their information when storing the data, we ensure user privacy and mitigate the risks of exposing user data. The user owns their own data. We integrated Radiks into our Node and Express backend server and used this technology to manage our database for the app.

On the frontend, we wanted to create an experience that eagerly welcomes users to a safe community and shares an abundance of information to empower victims to take action. To do this, we built the frontend with React and Redux, with styling in SASS. We use Blockstack's Radiks API to gather anonymous messages in the Support Room feature. We used Twilio's message forwarding API to ensure that victims could very easily start anonymous conversations with professionals such as healthcare providers, mental health therapists, lawyers, and other administrators who could empower them. We created an admin dashboard for police officials to supervise communities, equipped with Esri's maps that plot where sexual assaults happen so they can patrol those areas more often. On the other pages, we aggregate online resources and research into an easy guide that gives victims the ability to take action easily. We used Azure for our backend cloud hosting with Blockstack.

## Challenges we ran into

We ran into issues with time, as we had ambitious goals for our multi-functional platform. More generally, we faced the learning curve of using Blockstack's APIs and integrating them into our application. We also ran into issues with React Router, as the Express routes were being overwritten by our frontend routes.

## Accomplishments that we're proud of

We had very little experience developing blockchain apps before, and this gave us hands-on experience with a use case we feel is really important.

## What we learned

We learned about decentralized data apps and the importance of keeping user data private. We learned about blockchain's applications beyond just cryptocurrency.

## What's next for Echo

Our hope is to get feedback from people impacted by sexual assault on how well our app can foster community, and factor this feedback into a next version of the application. We also want to build out shadowbanning, a feature to block abusive content from spammers on the app, using a trust system between users.
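A minimal sketch of the anonymous-relay idea behind the Twilio message forwarding described above, written as a Flask webhook. The routing table, proxy number, and credentials are hypothetical placeholders, not Echo's actual implementation.

```python
# Sketch of an anonymizing SMS relay: incoming messages are forwarded to the
# matched professional from a shared proxy number, so the sender's own number
# is never revealed. Lookup table and credentials are placeholders.
from flask import Flask, request
from twilio.rest import Client
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
client = Client("ACCOUNT_SID", "AUTH_TOKEN")     # placeholder credentials
PROXY_NUMBER = "+15550001111"                    # the number both parties see

# Hypothetical mapping of anonymized sender -> professional on the other end.
ROUTES = {"+15551234567": "+15557654321"}

@app.route("/sms", methods=["POST"])
def relay_sms():
    sender = request.form["From"]
    body = request.form["Body"]
    destination = ROUTES.get(sender)
    if destination:
        # Forward the body from the proxy number so the sender stays anonymous.
        client.messages.create(to=destination, from_=PROXY_NUMBER, body=body)
    return str(MessagingResponse())              # empty TwiML reply

if __name__ == "__main__":
    app.run()
```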
# Course Connection

## Inspiration

College is often heralded as a defining time to explore interests, define beliefs, and establish lifelong friendships. However, this vibrant campus life has recently become endangered as it becomes easier than ever for students to become disconnected. The previously guaranteed experience of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.

## What it does

Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert the transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of the people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student's similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students. From a commercial perspective, our app gives businesses the ability to use Checkbook to purchase access to course enrollment data.

## High-Level Tech Stack

Our project is built on top of a few key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real-time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).

## How we built it

### Initial Setup

Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website.

It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization.

### Backend Infrastructure

We needed a robust database infrastructure to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph "fingerprints" that represent a student's course identity. We wanted to take advantage of web3 storage, as it allows students to permanently store their course identity and access it easily. We also made use of Firebase to store the dynamic nodes and connections between courses and classes. We distributed our workload across several servers. We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, alongside a development-level Python server. We also had a Node.js server acting as a proxy and REST API endpoint, and Vercel hosted our front end.

### Graph Construction

Treating the Firebase database as the source of truth, we query it to get all user data, namely their usernames and which classes they took in which quarters. Taking this data, we construct a graph in Python using NetworkX, in which each person and course is a node with a type label "user" or "course" respectively. In this graph, we then add edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it. Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase's key-value storage format to cache this base graph in a JSON representation for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.

We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised, as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e., it can generalize to and embed new nodes without retraining. Additionally, with the attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users' node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken. With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, who they likely have a lot in common with and whom they may want to meet!

## Challenges we ran into

We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly during development, as we had to manage all the necessary servers. In terms of graph management, the biggest challenges were integrating the GAT and maintaining synchronization between Firebase and the cached graph.

## Accomplishments that we're proud of

We're very proud of the graph component, both in its data structure and in its visual representation.

## What we learned

It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with Next.js before. We were able to ramp up quickly, as we had React experience, and were very happy with how easily it integrated with Vercel.

## What's next for Course Connection

There are several different storyboards we would be interested in implementing for Course Connection. One would be course recommendation. We discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation.
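To make the graph construction and heuristic similarity concrete, here is a small NetworkX sketch of the user-course graph with recency-weighted edges and the JSON caching step. The field names and exact recency weighting are illustrative assumptions, not the production values.

```python
# Sketch of the user-course graph and a recency-weighted similarity heuristic,
# using NetworkX. Data shapes and weights are illustrative assumptions.
import json
import networkx as nx

def build_graph(users):
    """users: [{"name": ..., "courses": [{"code": ..., "quarters_ago": ...}]}]"""
    G = nx.Graph()
    for u in users:
        G.add_node(u["name"], type="user")
        for c in u["courses"]:
            G.add_node(c["code"], type="course")
            # More recently taken courses get heavier edges.
            G.add_edge(u["name"], c["code"], weight=1.0 / (1 + c["quarters_ago"]))
    return G

def similarity(G, a, b):
    """Recency-weighted overlap of shared courses over the union of courses taken."""
    ca, cb = set(G[a]), set(G[b])
    shared = sum(G[a][c]["weight"] + G[b][c]["weight"] for c in ca & cb)
    total = sum(G[a][c]["weight"] for c in ca) + sum(G[b][c]["weight"] for c in cb)
    return shared / total if total else 0.0

def cache_graph(G, path="graph.json"):
    # JSON representation suitable for caching in a key-value store like Firebase.
    with open(path, "w") as f:
        json.dump(nx.node_link_data(G), f)
```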
partial
## Inspiration

When introduced to financial strategies, many are skeptical simply because they can't imagine a significant reward for smarter spending.

## What it does

* Gives you financial advice based on your financial standing (how many credit cards you have, what the limits are, whether you're married or single, etc.)
* Shows you a rundown of your spending separated by category (gas, cigarettes, lottery, food, etc.)
* Identifies transactions as reasonable or unnecessary

## How I built it

Used React for the most part, in combination with Material UI. The charting library used is Carbon Charts, which is also developed by me: <https://github.com/carbon-design-system/carbon-charts>

## Challenges I ran into

* AI
* Identification of reasonable or unnecessary transactions
* Automated advising

## Accomplishments that I'm proud of

* Vibrant UI

## What I learned

* Learned a lot about React router transitions
* Aggregating data

## What's next for SpendWise

To find a home right inside your banking application.
## Inspiration

Our inspiration came from seeing how overwhelming managing finances can be, especially for students and young professionals. Many struggle to track spending, stick to budgets, and plan for the future, often due to a lack of accessible tools or financial literacy. So, we decided to build a solution that isn't just another financial app, but a tool that empowers individuals, especially students, to take control of their finances with simplicity, clarity, and efficiency. We believe that managing finances should not be a luxury or a skill learned through trial and error, but something that is accessible and intuitive for everyone.

## What it does

Sera simplifies financial management by providing users with an intuitive dashboard where they can track their recent transactions, bills, budgets, and overall balances - all in one place. What truly sets it apart is the personalized, AI-powered guidance that goes beyond simple tracking. Users receive actionable recommendations like "manage your budget" or "plan for retirement" based on their financial activity.

With features like scanning receipts via QR code and automatic budget updates, we ensure users never miss a detail. The AI chatbot, SeraAI, offers tailored financial advice and can even handle tasks like adding transactions or adjusting budgets - making complex financial decisions easy and stress-free. With a focus on accessibility, Sera makes financial literacy approachable and actionable for everyone.

## How we built it

We used Next.js with TailwindCSS for a responsive, dynamic UI, leveraging server-side rendering for performance. The backend is powered by Express and Node.js, with MongoDB Atlas for scalable, secure data storage. For advanced functionality, we integrated Roboflow for OCR, enabling users to scan receipts via QR codes and automatically update their transactions. Cerebras handles AI processing, powering SeraAI, our chatbot that offers personalized financial advice and automates various tasks on our platform. In addition, we used Tune to provide users with customized financial insights, ensuring a proactive and intuitive financial management experience.

## Challenges we ran into

Integrating OCR with our app posed several challenges, especially when using Cerebras for real-time processing. Achieving high accuracy was tricky due to the varying layouts and qualities of receipts, which often led to misrecognized data. Preprocessing images was essential; we had to adjust brightness and contrast to help the OCR perform better, which took considerable experimentation. Handling edge cases, like crumpled or poorly printed receipts, also required robust error-checking mechanisms to ensure accuracy. While Cerebras provided the speed we needed for real-time data extraction, we had to ensure seamless integration with our user interface. Overall, combining OCR with Cerebras added complexity but ultimately enhanced our app's functionality and user experience.

## Accomplishments that we're proud of

We're especially proud of developing our QR OCR system, which showcases our resilience and capabilities despite challenges. Integrating OCR for real-time receipt scanning was tough, as we faced issues with accuracy and image preprocessing. By leveraging Cerebras for fast processing, we overcame initial speed limitations while ensuring a responsive user experience. This accomplishment is a testament to our problem-solving skills and teamwork, demonstrating our ability to turn obstacles into opportunities. Ultimately, it enhances our app's functionality and empowers users to manage their finances effectively.

## What we learned

We learned that financial education isn't enough; people need ongoing support to make lasting changes. It's not just about telling users how to budget; it's about providing the tools, guidance, and nudges to help them stick to their goals. We also learned the value of making technology feel human and approachable, particularly when dealing with sensitive topics like money.

## What's next for Sera

The next steps for Sera include expanding its capabilities to integrate with more financial platforms and further personalizing the user experience to provide everyone with guidance and support that fits their needs. Ultimately, we want Sera to be a trusted financial companion for everyone, from those just starting their financial journey to experienced users looking for better insights.
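As a concrete example of the preprocessing step mentioned in the challenges, here is a short Pillow sketch that adjusts brightness and contrast before a receipt image is handed to the OCR service. The enhancement factors are illustrative guesses, not Sera's tuned values.

```python
# A small sketch of receipt preprocessing: normalize orientation, convert to
# grayscale, and lift brightness/contrast before sending the image to OCR.
# Enhancement factors are illustrative, not tuned production values.
from PIL import Image, ImageEnhance, ImageOps

def preprocess_receipt(path: str) -> Image.Image:
    img = Image.open(path)
    img = ImageOps.exif_transpose(img)                 # respect phone orientation
    img = ImageOps.grayscale(img)                      # receipts are effectively monochrome
    img = ImageEnhance.Brightness(img).enhance(1.2)    # lift dim photos slightly
    img = ImageEnhance.Contrast(img).enhance(1.6)      # sharpen faint thermal print
    return img

preprocess_receipt("receipt.jpg").save("receipt_clean.png")
```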
## Inspiration

Financial literacy is an essential skill for teenagers and young adults, but unfortunately, many lack the knowledge. This is especially important since financial habits developed during the earlier years of one's life usually persist into the future. We aim to provide an effective and engaging way to reach out and educate those in high school and university on making healthy financial decisions.

## What it does

Our financial simulator provides an entertaining and immersive experience that educates young adults on making healthy financial decisions. By reproducing the experience of earning and spending assets, we personalize the experience, providing more effective learning potential. Intuitive and responsive data visualization tools also provide statistics on the user's financial spending.

## How we built it

The data visualization tools were rendered using Chart.js along with a labelling plugin, Chart.js Labels. Custom plugins were written to provide a more dynamic and interactive response. The front end and routing of the program were done using Angular 6 and styled with Bootstrap 4. The resulting program was hosted on Heroku.

## Challenges we ran into

We had a lot of trouble routing data through our front end using Angular 6. We also found ourselves thinking a lot about game mechanics to balance the "playability" of the game with the educational aspect.

## Accomplishments that we're proud of

Since our team is composed of members that predominantly work in back-end software and development, creating an application with an emphasis on beautiful and immersive UI/UX was a welcome challenge and change of pace. We were able to create a UI that is responsive and appealing to our target market in very few iterations of the UX design. We were also able to set up and maintain a continuous delivery pipeline on a git branch to allow for rapid iterations, so that all team members could see new product changes quickly. Lastly, we wrote our own plugins for Chart.js to allow for custom features that made data visualization much more impactful.
partial
## Inspiration

Every day, students pay hundreds of dollars for textbooks that they could be getting for lower prices had they spent the time to browse different online stores. This disadvantageous situation forces many students to choose between pirating textbooks and going through a course without one. We imagined a way to automate the tedious process of manually searching the online stores that offer cheaper prices.

## What it does

Deliber allows users to quickly enter a keyword search or ISBN and find the best prices for books online by consuming book pricing and currency conversion information from several upstream APIs provided by Amazon Web Services, Commission Junction, BooksRun, and Fixer.io.

## How we built it

The backend processing that deals with upstream APIs is done in Go, and client-side work is in JavaScript with jQuery. Reverse proxying and serving of our website are done using the Apache web server. The user interface of the site is implemented using Bootstrap. The Go backend uses many open-source libraries such as the Go Amazon Product API, the Go Fixer.io wrapper, and the Go Validator package.

## Challenges we ran into

Parsing of XML, and to a lesser extent JSON, was a significant challenge that prevented us from using PHP as one of the backend languages. User interface design was also an obstacle in the development of the site. A setback that befell us in the early stages of our planning was rejection from the majority of online bookstores that we applied to for API access. Their main reason for rejection was the lack of content on the site, since we could not write any code before the competition. We chose to persist in the face of this setback despite the resulting lack of vendors, as the future potential of Deliber remained, and remains, high.

## Accomplishments that we're proud of

In 24 hours, we built a practical tool that anybody can use to save money when buying books on the internet. During this short time period, we were able to quickly learn new skills and hone existing ones to tackle the aforementioned challenges.

## What we learned

In facing the challenges we encountered, we learned about the complexity of manipulating data coming from different sources with different schemas; the difficulty of processing this data in PHP in comparison to Go or JavaScript; and the importance of consulting concise resources like the Mozilla Developer Network web documentation. Additionally, the 24-hour time constraint of nwHacks showed us the importance of using open-source libraries for low-level tasks rather than reinventing the wheel.

## What's next for Deliber

Now that we have a functional site with the required content, we plan to reapply for API access to the online bookstores that previously rejected us. More vendors mean lower prices for the users of Deliber. Additionally, API access to these vendors is coupled with affiliate status, which is a path towards making Deliber a self-sustaining entity through the commission earned from affiliate links.
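To illustrate the price-normalization idea, here is a small sketch that converts vendor quotes using Fixer.io-style exchange rates and picks the cheapest offer. The real Deliber backend does this in Go; the data shapes below are assumptions made for the example.

```python
# Illustration of the aggregation step: vendor quotes in different currencies
# are converted to a common currency and the cheapest offer wins.
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    price: float
    currency: str

def cheapest(offers, rates, target="CAD"):
    """rates: units of `target` per unit of each currency, e.g. {"USD": 1.35}."""
    def in_target(o):
        return o.price * rates.get(o.currency, 1.0)
    best = min(offers, key=in_target)
    return best, round(in_target(best), 2)

offers = [
    Offer("Amazon", 54.99, "USD"),
    Offer("BooksRun", 39.50, "USD"),
    Offer("LocalStore", 61.00, "CAD"),
]
print(cheapest(offers, {"USD": 1.35, "CAD": 1.0}))
```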
> **TL;DR**: We made a search engine designed to solve an information discovery problem that arises when you aren't sure what your query string should be, but rather have a specific document/webpage in mind that you want to find related work to. Instead of relying on the structural links that exist between websites, we employ natural language processing to do this search and clustering contextually. As a result we achieve superhuman classification of not only expository documents but also code snippets. Click [here](https://docs.google.com/presentation/d/1NJb-3AHk8Ew8TBDrgL-6HgXhWXINnqWjI2rWQQkZm5c/edit?usp=sharing) to view the pitch deck.

## See it in action:

#### Identifies, searches and clusters sub-topics relevant to the contextual material on a given page

[View post on imgur.com](//imgur.com/8PlVucq)

#### Determines algorithms and libraries used in code snippets from context

![](https://imgur.com/rmi5XNK.gif)

#### Identifies implicit political context and bias

![](https://imgur.com/BQW5CpU.gif)

#### Our public API endpoint received over 50,000 unique requests in under 36 hours

![](https://imgur.com/YiNcuB7.png)

## Inspiration

With an ever-changing world and a migration towards a digital landscape, the average user can become overwhelmed with data and information. In this digital world, humans have relied on Google searches to attain and discover their knowledge as opposed to conventional learning strategies. These searches can be classified into two subsets: knowledge discovery, where the question is undefined, and direct searching, where the question is defined. 60% of searches worldwide are classified as unsuccessful, meaning multiple searches were conducted before the desired result was attained. We were inspired to create a new solution in which we recommend articles on a topic similar to the one being searched, to aid in the discovery and education process. We think that by minimizing search time and frustration, finding the right data can be transformed into a journey instead of a pain point. We want everyone to indulge their curiosity in whatever topic, interest, or random fact they are looking for.

## What it does

We developed an API to expose the core algorithm, "bubblRank". Made available through StdLib, anyone can query our API with a web page and receive a categorized and labelled arrangement of related pages. We show one application of bubblRank by building a Chrome extension that computes the bubbl cluster of any given page and provides the user with the option to navigate through the cluster in an intuitive way.

The back end is powered by a state-of-the-art natural language processing and clustering algorithm designed at this hackathon. It scrapes the meaningful text from websites to produce rich document representations of those websites in vector form by averaging word vectors. By comparing their pairwise cosine similarities, we are able to design a robust similarity metric and then perform Hierarchical Density-Based Spatial Clustering in parallel. At every stage in the development of bubblRank, we take several steps to ensure that the accuracy of our algorithm is not compromised while maintaining state-of-the-art computation speed. Take, for example, the way in which we verify the accuracy of our document vector representations: we do a graph analysis using t-SNE plots to reduce the dimensionality of our vector space and compare the presence of clusters. We then take Spearman's R coefficient with respect to human tests to verify the clusters made. This attention to detail is prevalent throughout our entire project, and it is something we are very proud of.

## How we built it

Bubbl was built primarily on top of Java because of the language's capability for parallelism, making it more effective than Python (which we had originally considered) since Java allows cores to share memory whereas Python does not. Angular and JavaScript were used in the front end (web app) to facilitate a pleasant user experience. The core of the algorithm and API is exposed using StdLib and Node.js. All preliminary data integration tasks were done in Python.

## Challenges we ran into

The largest challenge and most prominent problem the team faced was comparing large sets of websites by similarity, which involved accessing the data through queries, compressing the data into large vectors, performing semantic analysis by comparing vectors using either Euclidean distance or cosine similarity, and then understanding and testing the similarity scores. Other problems later in the project stemmed from parallel clustering and from building a strong back end and front end to visually display the topics and similar articles in an innovative fashion.

## Accomplishments that we're proud of

* Parsing a large variety of websites and conducting TextRank with our clustering algorithm
* Building out a Chrome web extension and a back end with API calls to our clustering algorithm
* Comparing large corpora of data and encouraging learning through a sophisticated back-end similarity algorithm with a sleek UI

## What we learned

As our challenges demonstrate, a lot of machine learning and data integration was tackled. Additionally, project management proved to be a valuable skill.

## What's next for bubbl

Continuing to scale our infrastructure and expanding the use cases for our API.
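A toy Python illustration of the bubblRank pipeline described above: average word vectors into document vectors, compute pairwise cosine distances, and cluster with HDBSCAN. The production system is written in Java with learned embeddings; the tiny hand-made vectors here are placeholders so the sketch runs standalone.

```python
# Toy sketch of the pipeline: averaged word vectors -> cosine-distance matrix
# -> HDBSCAN clusters. The 3-d "embeddings" below are placeholders only.
import numpy as np
import hdbscan
from sklearn.metrics.pairwise import cosine_distances

WORD_VECS = {  # placeholder embeddings
    "graph": [1.0, 0.1, 0.0], "cluster": [0.9, 0.2, 0.1],
    "election": [0.0, 1.0, 0.2], "senate": [0.1, 0.9, 0.1],
    "python": [0.2, 0.0, 1.0], "numpy": [0.1, 0.1, 0.9],
}

def doc_vector(text):
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

docs = ["graph cluster", "senate election", "python numpy",
        "cluster graph python", "election senate graph"]
X = np.vstack([doc_vector(d) for d in docs])

# Cluster related pages over the pairwise cosine-distance matrix.
D = cosine_distances(X).astype(np.float64)
labels = hdbscan.HDBSCAN(min_cluster_size=2, metric="precomputed").fit_predict(D)
print(dict(zip(docs, labels)))
```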
## Inspiration

According to the Canadian Federation of Students, the average undergraduate student spends between $500-1000 on textbooks each term. Considering that courses run in 4-month terms, this expense is extremely unsustainable and is a burden on top of an already expensive tuition. A good solution would be to purchase used textbooks through online marketplaces such as Facebook or Kijiji, but these means have not been fully effective. One big issue is that the large volume of postings on these platforms makes it difficult to keep track of the ones that are relevant for a particular student. On Facebook, for instance, many users keep notifications off for a marketplace group as they would otherwise be disturbed constantly, but this means they will also miss out on items that they need. Additionally, the constant printing of new textbooks seems like an unnecessary strain on the environment, considering used books generally remain in fair condition and contain the exact same content as newer counterparts.

## What it does

Textbookify is a web app that allows students in university communities to create postings for textbooks that they wish to buy or sell. The platform then recommends potential buyers/sellers for each posting in order to facilitate increased used-textbook usage. Real-time notifications of new recommendations are sent through SMS via the Twilio API. Furthermore, the app displays market analytics (e.g. high-demand textbooks) in order to encourage students to participate in transactions.

## How we built it

The frontend is built with Vue.js, with a Node.js/Express.js backend that interacts with a MongoDB database.

## Challenges we ran into

Some of us were new to Vue.js. Debugging is difficult. Sleep was limited.

## Accomplishments that we're proud of

We were able to create a functional product using technology that was new to us.

## What we learned

MongoDB and Vue.js.

## What's next for textbookify

* Text recognition from images in order to auto-populate buy/sell data fields
* Incorporating data from other sources like Facebook Marketplace
partial
## Inspiration

Parking systems

## What it does

## How we built it

## Challenges we ran into

## Accomplishments that we're proud of

## What we learned

## What's next for ParkingAssist
## Inspiration

The inspiration for Park-Eazy came from the frustration of finding parking in busy urban areas, especially among students and daily commuters. With parking prices soaring and availability dwindling, we sought to create a platform that not only connects drivers with available parking spots but also helps space owners monetize their unused spaces.

## What it does

Park-Eazy is an innovative platform designed to streamline the parking rental process. Users can easily browse available parking spaces in their vicinity, making it easier for them to secure affordable parking. The platform empowers space owners to list their unused spots, allowing them to earn extra income while helping drivers find convenient parking solutions.

## How we built it

We built the frontend of Park-Eazy using React, ensuring a smooth and engaging user experience. The backend is powered by Python, which handles the application logic and connects to the database. We utilized Fetch.ai to create intelligent agents that fetch and display available parking spots dynamically. Additionally, we integrated a VAPI voice bot for customer care, allowing users to get assistance through voice commands.

## Challenges we ran into

One of the primary challenges was ensuring seamless communication between the frontend and backend, particularly in managing API calls and responses. Integrating the AI functionalities from Fetch.ai required meticulous planning to ensure efficient performance. Moreover, building an intuitive and responsive user interface was crucial to enhance user engagement. Handling customer queries effectively through the VAPI voice bot added another layer of complexity.

## Accomplishments that we're proud of

We are proud to have developed a functional prototype of Park-Eazy that successfully connects users with available parking spaces. The platform has undergone extensive testing and has received positive feedback, confirming its usability and effectiveness. Our integration of voice bot technology has also set us apart, making customer support more accessible.

## What we learned

Through the development of Park-Eazy, we gained a deeper understanding of full-stack application development, from designing user interfaces to managing backend processes. We learned how to integrate AI technologies effectively and the importance of user-centered design. Gathering and incorporating user feedback played a vital role in refining our platform and enhancing its features.

## What's next for Park-Eazy

Looking ahead, we plan to introduce advanced AI capabilities to provide personalized parking recommendations based on user behavior and preferences. We aim to expand our service to additional cities, addressing a broader audience. Additionally, we will work on improving the voice bot's capabilities, enhancing its interactivity and customer support functions. We are committed to creating a comprehensive solution that meets the evolving needs of urban parking.
## Inspiration

Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.

## What it does

You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero setup or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.

## How we built it

The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was done in Canva with a dash of Harvard colors.

## Challenges we ran into

One major challenge was getting the different parts of the app - frontend, backend, and AI - to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.

## Accomplishments that we're proud of

We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.

## What we learned

We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.

## What's next for Harvard Burger

Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast food chains.
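One way the order-extraction step could be structured is sketched below: the transcription model is asked to reply in JSON, and the reply is validated against the menu before items are added to the order. The menu, field names, and reply format are assumptions for illustration, not the team's actual code.

```python
# Rough sketch of parsing and validating an AI-extracted order. The menu and
# the JSON reply format are illustrative assumptions.
import json
from dataclasses import dataclass

MENU = {"harvard burger": 7.99, "fries": 2.99, "milkshake": 4.49}

@dataclass
class OrderItem:
    name: str
    quantity: int
    modifications: list

def parse_order(model_reply: str) -> list[OrderItem]:
    items = []
    for raw in json.loads(model_reply):             # e.g. [{"name": "fries", "quantity": 2, "modifications": []}]
        name = raw.get("name", "").lower()
        if name not in MENU:
            continue                                # ignore anything not on the menu (background chatter, etc.)
        items.append(OrderItem(name, int(raw.get("quantity", 1)), raw.get("modifications", [])))
    return items

reply = '[{"name": "Harvard Burger", "quantity": 1, "modifications": ["no onions"]}]'
print(parse_order(reply))
```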
losing
## AI, AI, AI...

The number of projects using LLMs has skyrocketed with the wave of artificial intelligence. But what if you *were* the AI, tasked with fulfilling countless orders and managing requests in real time? Welcome to chatgpME, a fast-paced, chaotic game where you step into the role of an AI who has to juggle multiple requests, analyze input, and deliver perfect responses under pressure!

## Inspired by games like Overcooked...

chatgpME challenges you to process human queries as quickly and accurately as possible. Each round brings a flood of requests - ranging from simple math questions to complex emotional support queries - and it's your job to fulfill them quickly with high-quality responses!

## How to Play

Take Orders: Players receive a constant stream of requests, represented by different "orders" from human users. The orders vary in complexity - from basic facts and math solutions to creative writing and emotional advice.

Process Responses: Quickly scan each order, analyze the request, and deliver a response before the timer runs out.

Get Analyzed: Our built-in AI checks how similar your answer is to what a real AI would say :)

## Key Features

Fast-Paced Gameplay: Just like Overcooked, players need to juggle multiple tasks at once. Keep those responses flowing and maintain accuracy, or you'll quickly find yourself overwhelmed.

Orders with a Twist: The more aware the AI becomes, the more unpredictable it gets. Some responses might start including strange, existential musings - or it might start asking you questions in the middle of a task!

## How We Built It

Concept & Design: We started by imagining a game where the player experiences life as ChatGPT, but with all the real-time pressure of a time-management game like Overcooked. Designs were created in Procreate and our handy notebooks.

Tech Stack: Using Unity, we integrated a system where mock requests are sent to the player, each with specific requirements and difficulty levels. A template was generated using Defang, which we also used to sanitize user inputs. Answers are then evaluated using the fantastic Cohere API!

Playtesting: Through multiple playtests, we refined the speed and unpredictability of the game to keep players engaged and on their toes.
## Inspiration

We have to make a lot of decisions, all the time - whether it's choosing your next hackathon project idea, texting your ex or not, writing an argumentative essay, or settling a debate. Sometimes, you need the cold hard truth. Sometimes, you need someone to feed into your delusions. But sometimes, you need both!

## What it does

Give the Council your problem, and it'll answer with four (sometimes varying) AI-generated perspectives! With 10 different personalities to choose from, you can get a bunch of (imaginary) friends to weigh in on your dilemmas, even if you're all alone!

## How we built it

The Council utilizes OpenAI's GPT 3.5 API to generate responses unique to our 10 pre-defined personas. The UI was built with three.js and react-three-fiber, with a mix of open source and custom-built 3D assets.

## Challenges we ran into

* 3D hard
* merge conflict hard
* Git is hard

## Accomplishments that we're proud of

* AI responses that were actually very helpful and impressive
* Lots of laughs from funny personalities
* Custom disco ball (SHEEEEEEEEESH shoutout to Alan)
* Sexy UI (can you tell who's writing this)

## What we learned

This project was everyone's first time working with three.js! While we had all used OpenAI for previous projects, we wanted to put a unique spin on the typical applications of GPT.

## What's next for The Council

We'd like to actually deploy this app to bring as much joy to everyone as it did to our team (sorry to everyone else in our room who had to deal with us cracking up every 15 minutes)
## Inspiration

When learning new languages, it can be hard to find people and situations to properly practice with, let alone the motivation to continue. As a group of second-generation immigrants who have taken both English and French classes, language is an embedded part of our identity. With Harbour, we sought to make the practice partner that we would want to use ourselves: speaking, listening, reading, and writing under fun, objective-based scenarios.

## What it does

Harbour places you in potential real-life situations where natural conversations could arise, using a large-language-model-powered chatbot to communicate in real time. It generates scenarios where you can decide what language and vocabulary difficulty you're aiming for. Adding a fun twist, you'll receive objectives for what to do: figure out what time it is, where the washroom is, or why your conversation partner is feeling upset. If you are ever confused about a word or phrase the chatbot uses, simply highlight the offending text and a friendly prompt to get a live translation will appear. Additional features include text-to-speech and speech-to-text, for the dual purposes of practicing oral conversation and providing maximum accessibility.

## How we built it

The web application for Harbour was created using Next.js as a framework for React. Next.js enabled us to create our backend, which returns the data the frontend requires through API requests. These were powered by LangChain with OpenAI's LLM for the conversation and the Google Cloud API for translations. Using the Vercel AI SDK also allowed us to manipulate server-side event streaming to get the responses from the LangChain model over to the frontend as we received parts of the response. Text-to-speech and speech-to-text conversions were handled through the Web Speech API and the node module react-speech-recognition. Axios and fetch were used to make HTTP requests to our backend as well as to the APIs used. Across our development process, GitHub was used for version control so that our four team members could each work on separate portions of the project on various branches and merge them as the pieces fell together.

## Challenges we ran into

Initially, our major blocker was that the OpenAI LLM was slow to respond to our prompts. This was because we were attempting to receive the API response as a single JSON payload containing the full response. Due to the nature of LLMs, they typically respond in portions of content, appearing similar to a person typing a response. Generating the whole response takes a longer amount of time, but since it is generated portion by portion, readers can follow along as it appears. However, we were waiting until the full response was processed, causing the LLM to feel slow and the user experience of the product to be poorer. To handle this slow response, we decided to receive the data from the API in the form of a stream instead, which sends the information portion by portion so it can be displayed for users while it is being generated. However, our choice to process the data as a stream meant that it was no longer compatible with the API structure we had originally built. This required us to restructure our API in order to process the data. We were successfully able to restructure our API to effectively handle the streamed data and send it to the frontend, which displays the data in a progressively generated format similar to other LLM-powered applications, as well as processing the data and presenting it in audio format.

## Accomplishments that we're proud of

We're proud of our text-to-speech and speech-to-text features that operate alongside the messaging function of our application. These features enable learners to develop their reading, writing, listening, and speaking skills simultaneously. Additionally, they help those who may not easily be able to communicate with one skill still develop their language abilities. We're also proud of our team's ability to quickly learn how to develop an LLM application, despite it being our first time working with this type of technology. We were able to effectively learn how to create LLM applications, structure ours, and deliver a seamless user experience and product within the short timeframe of two days.

## What we learned

This was our team's first time creating an LLM application. Harbour is a language service centered around dialogue. With the goal of fostering a realistic connection with users, we dove into technologies centered on receiving, processing, and delivering data, as well as building a fluid user experience. Specifically, we learned how to efficiently deliver the required prompts to the OpenAI API in order to refine and receive the intended answer, and also how to effectively create prompts the LLM can easily understand and follow through with, to prevent unintended features or mistakes. We also gained experience with Next.js by building a React framework for full-stack development and structuring the backend to process frontend requests. On the front end, we learned how to integrate React design and components with Next.js.

## What's next for Harbour

Now that we've completed the MVP and some nice-to-have features, we would like to further improve Harbour with more unique features and functionality. One main feature of Harbour currently is the unique, almost life-like situations and experiences that the user can go through to practice their language abilities. We would like to further improve our prompt engineering to continue creating more depth in our scenarios for users to practice with. For example, we'd like to include more levels of difficulty in our scenarios, so that learners at different levels can learn effectively. This could mean including more levels of vocabulary, grammar, and sentence-structure difficulty. Additionally, we would like to improve the situations to include more variety, such as creating different personalities. For example, different situations could feature personas that engage in avid political debates, or that speak to the user in a professional workplace manner. One fun idea we'd like to implement in future versions is integrating famous personalities into Harbour. From strolling Hogwarts while chatting with Harry Potter to heating it up in the kitchen with Gordon Ramsay, who could resist talking with their favourite characters, all whilst developing their ability to speak in another language?
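Harbour streams through Next.js and the Vercel AI SDK, but the chunk-by-chunk idea generalizes. The sketch below shows the same pattern in Python, assuming the pre-1.0 `openai` SDK: each delta is forwarded to the client as it arrives instead of waiting for the full JSON body.

```python
# Streaming pattern sketch: request the completion with stream=True and yield
# each content delta immediately. Assumes the pre-1.0 `openai` Python SDK;
# the scenario prompt is an illustrative placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def stream_reply(scenario: str, user_message: str):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        stream=True,
        messages=[
            {"role": "system", "content": f"You are a conversation partner. Scenario: {scenario}"},
            {"role": "user", "content": user_message},
        ],
    )
    for chunk in response:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]        # forward this piece to the client immediately

for piece in stream_reply("ordering at a cafe in Paris", "Bonjour!"):
    print(piece, end="", flush=True)
```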
winning
## Meet Our Team :)

* Lucia Langaney, first-time React Native user, first in-person hackathon, messages magician, snack dealer
* Tracy Wei, first-time React Native user, first in-person hackathon, payments pro, puppy petter :)
* Jenny Duan, first in-person hackathon, sign-up specialist, honorary DJ

## Inspiration

First Plate was inspired by the idea that food can bring people together. Many people struggle with finding the perfect restaurant for a first date, which can cause stress and anxiety. By matching users with restaurants, First Plate eliminates the guesswork and allows users to focus on connecting with their potential partner over a shared culinary experience. In addition, food is a topic that many people are passionate about, so a food-based dating app can help users form deeper connections and potentially find a long-lasting relationship. After all, the stomach is the way to the heart.

## What it does

Introducing First Plate, a new dating app that will change the way you connect with potential partners - by matching you with restaurants! Our app takes into account your preferences for cuisine and location, along with your dating preferences such as age, interests, and more. With our app, you'll be able to swipe through restaurant options that align with your preferences and match with potential partners who share your taste in food and atmosphere. Imagine being able to impress your date with a reservation at a restaurant that you both love, or discovering new culinary experiences together. Not only does our app provide a fun and innovative way to connect with people, but it also takes the stress out of planning a first date by automatically placing reservations at a compatible restaurant. No more agonizing over where to go or what to eat - our app does the work for you. So if you're tired of the same old dating apps and want to spice things up, try our new dating app that matches people with restaurants. Who knows, you might just find your perfect match over a plate of delicious food!

## How we built it

1. Figma mockup
2. Built React Native front-end
3. Added Supabase back-end
4. Implemented Checkbook API for pay-it-forward feature
5. Connecting navigation screens & debugging
6. Adding additional features

When developing a new app, it's important to have a clear plan and process in place to ensure its success. The first step we took was a brainstorming session, where we defined the app's purpose, features, and goals. This helped everyone involved get on the same page and create a shared vision for the project. After that, we moved on to creating a Figma mockup, where we made visual prototypes of the app's user interface. This is a critical step in the development process, as it allows the team to get a clear idea of how the app will look and feel. Once the mockup was completed, we commenced the React Native implementation. This step can be quite involved and requires careful planning and attention to detail. Finally, once we completed the app, we moved on to debugging and making final touches. This is a critical step in the process, as it ensures that the app is functioning as intended and any last-minute bugs or issues are resolved before submission. By following these steps, developers can create a successful app that meets the needs of its users and exceeds their expectations.

## Challenges we ran into

App development using React Native was extremely difficult, as it was our first time coding with this framework. The initial learning curve was steep, and the vast amount of information required to build the app, coupled with the time constraint, made the process even more challenging. Debugging the code also posed a significant obstacle, as we often struggled to identify and rectify errors in the codebase. Despite these difficulties, we persisted and learned a great deal about the React Native framework, as well as how to debug code more efficiently. The experience taught us valuable skills that will be useful for future projects.

## Accomplishments that we're proud of

We feel extremely proud of having coded First Plate as React Native beginners. Building this app meant learning a new framework, developing a deep understanding of software development principles, and having a clear understanding of what the app is intended to do. We were able to translate an initial Figma design into a React Native app, creating a user-friendly, colorful, and bright interface. Beyond the frontend design, we learned how to create a login and sign-up page securely connected to the Supabase backend, and integrated the Checkbook API for the "pay it forward" feature. Both of these features were also new to our team. Along the way, we encountered many React Native bugs, which were challenging and time-consuming to debug as a beginner team. We implemented front-end design features such as scroll view, flexbox, tab and stack navigation, a unique animation transition, and page linking using a navigator to create a seamless and intuitive user experience in our app. We are proud of our teamwork, determination, and hard work, which culminated in a successful project.

## What we learned

In the course of developing First Plate, we learned many valuable lessons about app development. One of the most important things we learned was how to implement different views and navigation bars to create a seamless and intuitive user experience. These features are critical components of modern apps and can help to keep users engaged and increase their likelihood of returning to the app. Another significant learning experience was our introduction to React Native, a powerful and versatile framework that allows developers to build high-quality cross-platform mobile apps. As previous Swift users, we had to learn the basics of this framework, including how to use the terminal and Expo to write code efficiently and effectively. In addition to learning how to code in React Native, we also gained valuable experience in backend development using Supabase, a platform that provides a range of powerful tools and features for building, scaling, and managing app infrastructure. We learned how to use Supabase to create a real-time database, manage authentication and authorization, and integrate with other popular services like Stripe, Slack, and GitHub. Finally, we used the Checkbook API to allow the user to create digital payments and send digital checks within the app using only another user's name, email, and the amount the user wants to send. By leveraging these powerful tools and frameworks, we were able to build an app that was not only robust and scalable but also met the needs of our users. Overall, the experience of building First Plate taught us many valuable lessons about app development, and we look forward to applying these skills to future projects.

## What's next for First Plate

First Plate has exciting plans for the future, with the main focus being on fully implementing the front end and back end of the app. The aim is to create a seamless user experience that is efficient, secure, and easy to navigate. Along with this, our team is enthusiastic about implementing new features that will provide even more value to users. One such feature is expanding the "Pay It Forward" functionality to suggest who to send money to based on past matches, creating a streamlined and personalized experience for users. Another exciting feature is a feed where users can share their dining experiences and snaps of their dinner plates, or leave reviews of the restaurants they visited with their matches. These features will create a dynamic community where users can connect and share their love for food in new and exciting ways. In terms of security, our team is working on implementing end-to-end encryption for the app's chat feature to provide an extra layer of security for users' conversations. The app will also have a reporting feature that allows users to report any disrespectful or inappropriate behavior, ensuring that First Plate is a safe and respectful community for all. We believe that First Plate is a promising startup idea that could be implemented on a larger scale.
## Inspiration

We got the idea for this app after one of our teammates shared that during her summer internship in China, she could not find basic over-the-counter medication that she needed. She knew the brand name of the medication in English; however, she was unfamiliar with the local pharmaceutical brands and she could not read Chinese.

## Links

* [FYIs for your Spanish pharmacy visit](http://nolongernative.com/visiting-spanish-pharmacy/)
* [Comparison of the safety information on drug labels in three developed countries: The USA, UK and Canada](https://www.sciencedirect.com/science/article/pii/S1319016417301433)
* [How to Make Sure You Travel with Medication Legally](https://www.nytimes.com/2018/01/19/travel/how-to-make-sure-you-travel-with-medication-legally.html)

## What it does

This mobile app allows users traveling to different countries to find the medication they need. They can input the brand name in the language/country they know and get the name of the same compound in the country they are traveling to. The app provides a list of popular brand names for that type of product, along with images to help the user find the medicine at a pharmacy.

## How we built it

We used Beautiful Soup to scrape Drugs.com to create a database of the 20 most popular active ingredients in over-the-counter medication. We included in our database the name of the compound in 6 different languages/countries, as well as the associated brand names in the 6 different countries. We stored our database on MongoDB Atlas and used Stitch to connect it to our React Native front end. Our Android app was built with Android Studio and connected to the MongoDB Atlas database via the Stitch driver.

## Challenges we ran into

We had some trouble connecting our React Native app to the MongoDB database, since most of our team members had little experience with these platforms. We revised the schema for our data multiple times in order to find the optimal way of representing fields that have multiple values.

## Accomplishments that we're proud of

We're proud of how far we got considering how little experience we had. We learned a lot from this hackathon and we are very proud of what we created. We think that healthcare and finding proper medication are among the most important things in life, and there is a lack of informative apps for getting proper healthcare abroad, so we're proud that we came up with a potential solution to help travellers worldwide take care of their health.

## What we learned

We learned a lot of React Native and MongoDB while working on this project. We also learned what the most popular over-the-counter medications are and what they're called in different countries.

## What's next for SuperMed

We hope to continue working on our MERN skills in the future so that we can expand SuperMed to include even more data from a variety of different websites. We also hope to collect language translation data and use ML/AI to automatically translate drug labels into different languages. This would provide even more assistance to travelers around the world.
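One plausible way to represent an active ingredient with multi-country names and brands is a single document per compound, sketched below with pymongo. The field names, sample values, and connection string are illustrative assumptions, not SuperMed's actual schema.

```python
# Sketch of one document per active ingredient and a brand-name lookup query.
# Schema, sample values, and the connection URI are illustrative placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")  # placeholder URI
meds = client["supermed"]["ingredients"]

doc = {
    "ingredient": "ibuprofen",
    "names": {"US": "ibuprofen", "CN": "布洛芬", "ES": "ibuprofeno"},
    "brands": {"US": ["Advil", "Motrin"], "CN": ["芬必得"], "ES": ["Espidifen"]},
}
meds.insert_one(doc)

def translate(brand: str, home: str, destination: str):
    """Given a brand known at home, return local brand names at the destination."""
    match = meds.find_one({f"brands.{home}": brand})
    return match["brands"].get(destination, []) if match else []

print(translate("Advil", "US", "CN"))
```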
## Inspiration As avid travellers, we often find it hard to find things to do while staying within a budget. Just this winter, one of our group members went on vacation and went wildly over budget without realizing it. He often had trouble finding things to do every day and places to eat. So we built an app that shows the user restaurants and things to do that fall under a certain price category. ## What it does We built a web app that uses an algorithm to present different restaurants or things to do to the user. The user can then choose to save the information or pass on it. Depending on the user's choices, our recommendation engine will adapt to better match the user's preferences. The user can then access their saved lists once they are logged in through Google authentication. ## How we built it Our website is hosted on Firebase, using Firestore as the database to store all the restaurants, things to do, and user data. To access our data, we created a Flask RESTful API that we can call using requests to perform queries, add data, and delete data from Firestore. Our front end was built using the React framework to create a dynamic web app for an elevated user experience. ## Challenges we ran into Two challenges we ran into were figuring out how to deploy our REST API and converting our Figma designs to code. For the REST API, we didn't have time to figure out how to use Azure or AWS, as our hands were full with learning how to use Firebase, so we chose to resolve this by hosting the REST API on our local computer. The code generated from our Figma design turned out to be impossible to work with, so we used it as a proof of concept and a learning opportunity for the future, but ultimately had to remake the web app. ## Accomplishments that we're proud of We learned how to implement a database for data persistence. Gone are the days of saving data in CSV files. We also learned how to use Firebase, which can be a valuable asset for our future projects. Lastly, we learned to create REST APIs to access databases and connect our front end and back end. Overall, the sheer amount of learning we all did was a great accomplishment, and we are very proud of the product that we ended up with, especially considering that this was the first hackathon for all of us. ## What we learned Firstly, we learned that hackathons are hard; the number of hours and brain power that goes into creating a project in such a short period of time is not something to be underestimated. This hackathon also taught us just how little we know about software development. We have a long way to go, but this hackathon was a stepping stone for all of our computer science journeys. We all learned countless things, such as implementing databases, using new APIs, and creating APIs, and we look forward to continuing to learn more every day. ## What's next for Street Savvy The next step would be to fully deploy the app by learning to use a service to host our API, adding more locations to our app, and creating mobile applications. What's great about Firebase is its expandability and prebuilt support for mobile applications. For us as a team, we will be continuing our studies, pursuing side projects, and looking for internships!
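As a rough illustration of the Flask-plus-Firestore pattern described above, here is a minimal endpoint that queries saved places by price category. The collection and field names (`places`, `price_level`) are assumptions, and the service-account setup is the standard `firebase_admin` flow rather than the team's exact code.

```python
import firebase_admin
from firebase_admin import credentials, firestore
from flask import Flask, jsonify, request

# Assumes a service-account key file exported from the Firebase console.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

app = Flask(__name__)

@app.route("/places", methods=["GET"])
def get_places():
    """Return places at or below the requested price level (1-4)."""
    max_price = int(request.args.get("max_price", 4))
    docs = (
        db.collection("places")            # hypothetical collection name
          .where("price_level", "<=", max_price)
          .stream()
    )
    return jsonify([doc.to_dict() | {"id": doc.id} for doc in docs])

if __name__ == "__main__":
    app.run(debug=True)
```

A React front end would then call `GET /places?max_price=2` and render the returned list for the user to save or pass on.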
winning
## Inspiration Epidemiology is critical in figuring out how to stop the spread of diseases before it's too late. ## What it does ClassiFly uses image data to classify individuals with known disease symptoms. For demonstration purposes, we selected Yellow Fever, Methicillin-resistant Staphylococcus aureus, and elephantiasis. ## How I built it The app was developed in Swift, and the classification model was trained using a split-data classifier method, which leveraged Apple's native CreateMLUI framework to build an image classifier model with 89% accuracy. ## Challenges we ran into We initially planned on building an autonomous drone for tracking that could be used to identify certain key epidemiological characteristics in medically unsafe and infected regions; this would effectively increase accessibility to remote areas that are susceptible to infection. However, there was no clear way to interface with the drone via an API, so we decided to simply build a classification app that lets you take a drone image captured in a contaminated area and derive certain key epidemiological insights from it. ## Accomplishments that I'm proud of I am proud that we were able to work together efficiently to build an image classification app. ## What I learned We learned how to navigate managing development projects as a team, as well as how to leverage really powerful computer vision capabilities with CoreML. ## What's next for ClassiFly What if we could have an army of medical detectives in the sky, able to reach the most remote populations? Briefly: navigate to remote areas, collect image data of the populace, and use machine learning to classify afflictions based on visible symptoms. This paints a better picture of the disease landscape much faster than any human observation.
## Inspiration The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed DisasterAid.ai to provide concise, accurate, and timely updates. ## What it does DisasterAid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico Wildfires and Floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies. ## How we built it 1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises. 2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG). 3. AI Processing: The compiled information was input into Anthropic's Claude 3.5 model. 4. Output Generation: The AI model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations. ## Challenges we ran into 1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge. 2. Real-Time Processing: Developing a system capable of processing and summarizing information in real time required sophisticated algorithms and infrastructure. 3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret the information presented by the platform. ## Accomplishments that we're proud of 1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations. 2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform. 3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts. ## What we learned 1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management and the complexities involved in verifying information from various sources. 2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately. 3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve these needs. ## What's next for DisasterAid.ai 1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates. 2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses. 3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface. 4. Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of DisasterAid.ai. 5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
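To make the four-step pipeline above concrete, here is a minimal sketch of steps 3-4 using the Anthropic Python SDK: retrieved snippets are packed into a prompt and Claude is asked for a summary plus coordinates for map pins. The search step is stubbed out (the You.com integration is not shown), the model ID may differ from what the team used, and the JSON reply format is an assumed convention rather than their actual schema.

```python
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_crisis(snippets: list[str], question: str) -> dict:
    """Ask Claude for an info-bite summary and pin coordinates for a crisis."""
    context = "\n\n".join(snippets)
    prompt = (
        "You are a disaster-response assistant. Using ONLY the sources below, "
        f"answer the question and list event locations.\n\nSources:\n{context}\n\n"
        f"Question: {question}\n\n"
        'Reply as JSON: {"summary": "...", "pins": [{"lat": 0.0, "lon": 0.0, "label": "..."}]}'
    )
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",   # assumed model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(message.content[0].text)

# Example call with snippets that would normally come from the search step:
# print(summarize_crisis(["Wildfire reported near Ruidoso, NM ..."], "Where are evacuations?"))
```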
## Inspiration What inspired us was the recent Omicron situation and how Covid19 has affected our lives in general. We also decided to make a prototype, since we were inspired by Figma, which we were introduced to a bit before the hackathon. ## What it does We first used MindSpore to train on our custom dataset of Covid19 CT scans with the GoogLeNet convolutional neural network. We also made a prototype of an app that would use the trained model. Though we did not manage to implement the integration, our prototype simulates what we would essentially want it to look like. ## How we built it We built our model and trained it using MindSpore on the Huawei AI Platform. The data we used came from open public datasets. For the prototype, we used Figma. ## Challenges we ran into The MindSpore framework was really different from other frameworks we are used to, such as Keras. Getting to know how MindSpore works and how we could train our model was challenging. ## Accomplishments that we're proud of We are proud that we managed to train our model with MindSpore and that we designed our first ever prototype in Figma. ## What we learned We learned a lot from this project. We first learned how MindSpore works and how we can create datasets and train models. We also learned how to design an app with Figma. Overall, the whole process of developing and using machine learning was a good learning experience. ## What's next for Covid19 Detection App with Deep Learning To improve our model, we could use transfer learning with pretrained models. We could also use image enhancement techniques and super-resolution to improve image quality. Another point would be to use more data.
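A heavily simplified sketch of the MindSpore training loop described above follows. It assumes the CT scans are arranged in class-named folders and that a GoogLeNet definition is available locally (for example, copied from the MindSpore ModelZoo); the `googlenet` import, image size, and hyperparameters are placeholders rather than the team's actual settings.

```python
import mindspore.dataset as ds
import mindspore.dataset.vision as vision
from mindspore import nn, Model

from googlenet import GoogLeNet  # placeholder: network definition copied from the ModelZoo

def build_dataset(data_dir: str, batch_size: int = 32):
    """Load CT images from class-named folders and prepare them for training."""
    dataset = ds.ImageFolderDataset(data_dir, shuffle=True)
    transforms = [
        vision.Decode(),
        vision.Resize((224, 224)),
        vision.Rescale(1.0 / 255.0, 0.0),
        vision.HWC2CHW(),
    ]
    dataset = dataset.map(operations=transforms, input_columns="image")
    return dataset.batch(batch_size)

train_ds = build_dataset("data/train")
net = GoogLeNet(num_classes=2)                      # COVID vs. non-COVID
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"})
model.train(epoch=10, train_dataset=train_ds)
```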
partial
### Our inspiration Technology is becoming smarter and more accessible each year, but many studies have found that productivity hasn't always increased alongside innovation. In fact, productivity has even decreased according to studies done by Harvard. Companies like Google and Apple have begun trying to help users achieve a better balance in their lives with technology through their mobile devices, but we want to take this a step further and create a way for people to get control over the many distractions on the internet and become more aware of their unproductive and productive habits. Currently, our Chrome extension allows users to create a list of unproductive websites like Facebook, Instagram, and others to block when they want to go into a more focused and productive state. With a simple click on the extension popup menu, you can activate focus mode and block out distracting sites. If users need a break, they can then press another button to give themselves 15 minutes to recharge their creativity. In the extension's dashboard, users can reflect on and analyze their internet habits through engaging data visualizations pulled from their Android mobile device and Chrome history, which are stored on Azure. In its fully realized form, Google's products would harmoniously bridge together the data on their users' habits, show it in this dashboard, and help them figure out more productive habits. ### How we built it We built this Chrome extension using JavaScript in both the front and back end. A material design framework was used to provide a seamless look between Google's Digital Wellbeing mobile platform and our Chrome extension. An API was created and hosted with Azure for the back end. This allows digital usage data to be sent from the device and visualized on the desktop browser. In addition, IFTTT was used to put the phone in do-not-disturb mode once the user sets their Chrome extension into focus mode. ### Challenges As a team consisting mostly of members from design backgrounds, we faced lots of challenges with the implementation and learned a lot along the way. Managing asynchronous timers that could be checked on through our extension's interface was an especially tough one. ### What we're proud of We are very proud of the overall product opportunity we think we've identified and the UX flow we developed in the wireframe screenshots that can be seen in the gallery below. We learned to be more realistic with our ambitions and perhaps start on smaller-scale projects for future hackathons, especially considering this was two of our members' first hackathons. ### What's next Next for our project would be to integrate real data from Google's services into the app and to flesh out creating multiple types of "Focus Mode" profiles that can even trigger music and other services that add to productivity.
## Inspiration **The Immigration Story** While brainstorming a problem to tackle in this challenge, we were intrigued by the lengths that many immigrants are willing to go to for a new beginning. We wanted to come up with a way to try to ease this oftentimes difficult, intimidating transition to a new country and culture by creating a platform that would allow immigrants to connect with communities and resources relating to them. Current politics highlight the rising crisis of immigration and the major implications that it could have on current and future generations. An immigrant's story does not end after they arrive in the US; their struggle can continue for years after their move. Hopefully, Pangea can bridge the gap between immigrants and their new environment, improving their lives in their new home. ## What is It? **There are no borders in Pangea** Pangea provides a user-friendly platform that immigrants can use to locate and connect with cultural resources, communities, and organizations near them. Our website fosters a close connection to cultural centers and allows immigrants to easily find resources near them that can ease their transition into the US. Some of Pangea's major features include an interactive heat map that shows various restaurants, shops, community centers, and resources based on the user's selected features, and a plug-in to the Telegram social media platform, an app commonly used by many immigrants. This plug-in links directly to a translation bot that allows users to practice their English or perform a quick translation if they need to do so. ## How We Built It **Used HTML, JavaScript, CSS, PHP, the Google Maps API, NLP, and a Telegram plug-in** Using HTML and the Google Maps API, we created a homepage and a heat map showing resources in the area that may be useful for immigrants. For the plug-in, we found a Telegram plug-in online that was simple to integrate into our code, creating a helpful little translator accessible via a small bubble on the homepage. We also researched data for a couple of ethnicities and wrote code that added these data points to the heat map to show densities of what resources are where. ## Challenges As with any project, it was not always smooth sailing. Throughout our time creating this platform, we ran into many problems, such as dealing with an unfamiliar language (most of us had not used JavaScript before), tackling new programming techniques (such as building a website, creating a heat map, and utilizing APIs and plug-ins), and fighting through the growing exhaustion as our lack of sleep caught up to us. ## Accomplishments Our team is extremely proud of what we have accomplished this weekend. As a team composed of freshmen and sophomores with little to no experience with hackathons, we were able to produce a working website that incorporates many different features. Despite not having done many of the things we attempted this weekend, we succeeded in using plug-ins, an API, and natural language processing in our prototype. We also worked very well together as a team and formed bonds that will last long after HackMIT ends. ## What We Learned Throughout this experience, we learned many new skills that we will now be able to take with us in our future studies and projects. A few of the things we learned are listed below: * How to build a website * How to install a plug-in * How to integrate APIs into our website * Natural Language Processing (NLP) * How to create a heat map * How to code in JavaScript ## What's Next for Pangea? 
**Pangea knows no bounds** The applications for this site are near limitless. We hope to see Pangea grow as not only a resource for immigrants, but also a means of connection between them. In the future, we would like to incorporate profiles into our website, even expanding it into an app to create a social network for immigrants to connect with people of similar backgrounds, different cultures, and resources such as translators, lawyers, or social activists. In addition, we would like to add more data to our heat map to expand our reach past the local span of the Boston area. We plan to do this by scraping more data from the internet and including a more diverse scope of cultures in our database. Finally, we hope to further refine the translation bot by adding more language options so that Pangea will expand to even more cultural groups.
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their feelings. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go about their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We used an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps the essential route of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a 'Be Real'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning on integrating a brain2music feature, where users can not only see how they feel, but also hear what it sounds like.
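The pipeline above (brainwaves, then mood keywords, then image generation) could look roughly like the sketch below. The band-power thresholds and keyword mapping are invented for illustration, and the image step uses the Hugging Face `diffusers` library as a stand-in for however the team actually invoked Stable Diffusion.

```python
from diffusers import StableDiffusionPipeline

# Toy mapping from relative EEG band power to prompt keywords (illustrative only).
def mood_keywords(alpha: float, beta: float, theta: float) -> str:
    if beta > alpha and beta > theta:
        return "focused, vivid, sharp geometric shapes"
    if alpha > beta:
        return "calm, pastel, soft watercolor landscape"
    return "dreamlike, hazy, surreal night scene"

def generate_art(alpha: float, beta: float, theta: float) -> str:
    """Turn band powers into a prompt and render one image with Stable Diffusion."""
    prompt = f"abstract painting, {mood_keywords(alpha, beta, theta)}"
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    image = pipe(prompt).images[0]   # a PIL.Image
    image.save("brainstorm.png")
    return prompt

if __name__ == "__main__":
    print(generate_art(alpha=0.42, beta=0.31, theta=0.27))
```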
losing
## Inspiration We were inspired by the genetic-algorithm-based Super Mario AI known as MarI/O, made by SethBling. MarI/O uses genetic algorithms to teach a neural net to beat levels of Super Mario by maximizing an objective function. Inspired by this, we wanted to create a game that maximizes an objective function using genetic algorithms in order to present the player with a challenge. ## What it does A game designed to take EEG input to monitor and parse brainwave and stress-related data and produce a machine-learned environment for the user to interact with. The game generates obstacles in the form of hurdles and walls, and the user can control their speed and position using the real-time data streamed from the Muse headband. ## How we built it We utilized the Muse API and research tools to help construct a script in Java that connected to the Muse port via TCP. We then coded in Java to produce our graphics and movement interfaces. We developed a machine learning neural network that constructed each of the game stages, generating obstacle models based on previous iterations to increase the difficulty level. ## Challenges we ran into Being able to parse and convert the data feed from the Muse headband into a usable format was definitely a challenge that took us several hours to overcome. In addition, adjusting the parameters of our progressive machine learning to produce a non-repetitive, but still feasible, set of obstacles was another major challenge. ## Accomplishments that we're proud of I think just having the game environment that we produced and being able to run through it and interact with it is rewarding on its own. Along with the fact that this game has the potential to relieve stress levels and produce positive user feedback and impact, we all feel tremendously proud of the game that we have produced. ## What we learned We learned a lot about breeding neural networks and about different forms of data that can be utilized in novel and unique ways. ## What's next for iamhappy We definitely want to up our game on the UI and design side. We can allow for more user-adjusted parameters and settings to help fine-tune each of the user's preferences. In addition, we want to improve our visuals and the design of each level and environment. With a more aesthetically appealing background, we can definitely reach a higher mark in our objective of reducing user stress levels.
## Inspiration: The inspiration for RehabiliNation comes from a mixture of our love for gaming and our personal experiences researching and working with those who have physical and mental disabilities. ## What it does: Provides an accessible gaming experience for people with physical disabilities and motivates those fighting through the struggles of physical rehabilitation. It can also be used to track the progress people make while going through their healing process. ## How we built it: The motion-control armband collects data using the gyroscope module linked to the Arduino board. It sends the data back to the Arduino serial monitor in the form of angles. We then use a Python script to read the data from the serial monitor and interpret it into keyboard input, which allows us to interface with multiple games. Currently, it is used to play our Pac-Man game, which is written in Java. ## Challenges we ran into: Our main challenges were determining how to utilize the gyroscope with the Arduino board and figuring out how to receive and interpret the data with a Python script. We also came across some issues with calibrating the motion sensors. ## Accomplishments that we're proud of Throughout our creation process, we all managed to learn about new technologies, new skills, and new programming concepts. We may have been pushed into the pool, but it was quite a fun way to learn, and in the end we came out with a finished product capable of helping people in need. ## What we learned We learned a great amount about the hardware product process, as well as the utilization of hardware in general. Overall, it was a difficult but rewarding experience, and we thank U of T for providing us with this opportunity. ## What's next for RehabiliNation RehabiliNation will continue to refine our products in the future, including the use of better materials and more responsive hardware pieces than what was shown in today's proof of concept. Hopefully our products will be implemented by physical rehabilitation centres to help brighten the rehab process.
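A minimal sketch of the serial-to-keyboard bridge described above, using `pyserial` to read the angle stream from the Arduino and `pynput` to emit arrow-key presses. The port name, baud rate, message format, and tilt thresholds are assumptions, not the team's calibration values.

```python
import serial
from pynput.keyboard import Controller, Key

keyboard = Controller()
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port/baud are placeholders

def angle_to_key(angle: float):
    """Map a roll angle (degrees) from the armband to a Pac-Man direction."""
    if angle > 20:
        return Key.right
    if angle < -20:
        return Key.left
    return None

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        angle = float(line)          # assumes the Arduino sketch prints one angle per line
    except ValueError:
        continue
    key = angle_to_key(angle)
    if key is not None:
        keyboard.press(key)
        keyboard.release(key)
```

Because the bridge only emits ordinary key presses, the same script can drive any keyboard-controlled game, not just the Pac-Man clone.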
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their feelings. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go about their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We used an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps the essential route of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a 'Be Real'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning on integrating a brain2music feature, where users can not only see how they feel, but also hear what it sounds like.
partial
## Inspiration *When have you ever looked forward to opening your inbox?* As college students tracking assignments, job placements, and personal email all at the same time, the whole process can become so overwhelming and tedious. And imagine that during your midterms (real story!) your dream company sends you an invite email and it gets buried in your inbox. This is genuinely a life hack that we believe can save us (and many others) lots of time! ## What it does It's in the name! autoMate comes into the picture and solves the problem of manual work by automating all your job-tracking processes. It scrapes your emails, filters them by job application, and extracts the company name, job role, and current stage of the interview process. This helps overcome the errors involved in manual data entry. It is all done using some clever heuristics and NLP modelling, which classify whether your application has been accepted or rejected, when the due date is, and anything else that might be needed! ## How we built it We approached the problem by researching how we could parse emails effectively, and we found the Gmail API to do the same. Then the machine learning fanciness: it uses in-house classification algorithms such as Naive Bayes over count vectors, with 0.92 recall, applied to the email text to determine the status of the application. We also made extensive use of the Google Cloud Natural Language platform to extract relevant entities, which we use to identify hiring organizations, relevant job locations, and application deadlines. We also used large datasets accumulated from Kaggle to create a data dictionary of job roles and company names, which we search to identify whether a term is part of a company name or not. Sometimes you do not need a fancy ML algorithm; algorithmic thinking can really help instead. ## Challenges we ran into * Our initial idea was to use a third-party client with OAuth to fetch the emails. However, Google blocked third-party apps from accessing email data this year. * Our next challenge was to distinguish between a 'job' email and a 'normal' email. * Maybe fussing over that minor coding detail does not always make that big of a deal. * Spread out the work: we all tend to work at the end, but if we all put in a constant amount of effort throughout the event, that seemingly daunting task becomes much easier. ## Accomplishments that we're proud of * First international in-person hackathon for many of us! Can you believe it? We built our very own ML algorithm with 0.92 recall, with lots of room for improvement, obviously! * A lot of us were very new to the front end and made a decent one! We think we actually made a tool that we might really use! It is able to scrape emails and find the relevant job-related data, like the company name, the deadline for the next round (such as an online assessment or interview), and whether the final result of the job application was a rejection or an offer. ## What we learned Time management is key, and sometimes the most obvious/trivial solution is the best one. We also learned a lot about modeling challenges when building a practical solution and which parameters we can optimize.
## What's next for autoMate Implement a complex classifier for each of the subproblems of the applicant tracking process (Categorizing emails, finding sentiments valence and nuanced measures, also adding additional features like URLs, tracking multiple applications from the same company, and so on! Also, we plan to create a task scheduler based on the next date of the interview/assessment. This would help the user not miss any deadlines.
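As a rough illustration of the "Naive Bayes with count vectors" classifier mentioned above, here is a scikit-learn sketch that labels an email body as job-related or not. The toy training data and the 0/1 labels are placeholders; the real model was trained on the team's own labeled emails.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus; the real training set came from labeled Gmail messages.
emails = [
    "Thank you for applying, we would like to invite you to an online assessment",
    "Unfortunately we will not be moving forward with your application",
    "Your weekly newsletter: 10 recipes to try this fall",
    "Congratulations, we are pleased to extend you an offer",
]
labels = [1, 1, 0, 1]  # 1 = job-related, 0 = not

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Interview scheduled for next Tuesday at 10am"]))  # -> [1]
```

A second, similar classifier (or the entity results from the Natural Language API) could then decide the finer-grained status: invited, rejected, or offer.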
## Inspiration As university students, we are constantly pressured to prioritize recruiting while also balancing academics, extracurriculars, and well-being. We are expected to spend 10 hours or more on recruiting each week, and much of this time goes to mindlessly copying the same responses over and over again into job applications. We believe that with our application, we can significantly cut down these inefficiencies by storing and automatically filling repetitive portions of job applications. ## What it does Our hack comes in two parts: a website application and a Chrome extension. The website application serves as a hub for all data entry and progress updates regarding job applications. Here, the user can identify commonly asked questions and prepare static responses. When filling out a job application, the user can open the Chrome extension, which will identify questions it has stored responses for and automatically fill out those fields. ## How we built it The current demo of the application was made using Figma for the sketches, and Overflow was used for the app flow. The front-end application would be done using HTML, CSS, JavaScript, and a framework such as React. The extension was made using a manifest file, which includes the metadata Chrome needs to recognize it. For the purposes of the hackathon, the data was stored as a JSON file, but for future development it would be held in a secured database system. ## Challenges we ran into From the beginning, we recognized that data security had to be a top priority, as our application could be storing possibly sensitive information about our users. We initially considered manually encrypting all of our stored data, but realized that we did not have the skills or resources to accomplish this task. In the end, we decided that it was in the best interest of our users and ourselves to outsource this to a professional data security company. This will not only ensure that our users' data is being kept secure, but also provide our users with peace of mind. ## Accomplishments that we're proud of We are proud that we were able to build a demo displaying the user interface where users would enter their data. Being able to create a Figma x Overflow product concept for the first time was an accomplishment. Furthermore, none of us had built a Google Chrome extension before, and learning how to do that and creating basic functionality is something we are extremely proud of. ## What we learned We learned that there is a whole lot more that goes into making a pitch than we initially expected. For instance, as a team we prioritized making our application functional, and it was only later that we realized our presentation needed a more holistic approach, like including an action plan for development and deciding how we would finance the project. ## What's next for RE-Work We hope to explore the idea of building an accompanying mobile app to increase accessibility and convenience for the user. This way, the user can access and edit their information easily on the go. Additionally, this would allow for push notifications to keep the user up to date on everything related to job searching, and ease of mobile payment (e.g., Apple Pay) when upgrading to our premium subscription.
## Inspiration Many investors looking to invest in startup companies are often overwhelmed by the sheer number of investment opportunities, worried that they will miss promising ventures without doing adequate due diligence. Likewise, since startups all present their data in a unique way, it is challenging for investors to directly compare companies and effectively evaluate potential investments. On the other hand, thousands of startups with a lot of potential also lack visibility to the right investors. Thus, we came up with Disruptive as a way to bridge this gap and provide a database for investors to view important insights about startups tailored to specific criteria. ## What it does Disruptive scrapes information from various sources: company websites, LinkedIn, news, and social media platforms to generate the newest possible market insights. After homepage authentication, investors are prompted to indicate their interest in either Pre/Post+ seed companies to invest in. When an option is selected, the investor is directed to a database of company data with search capabilities, scraped from Kaggle. From the results table, a company can be selected and the investor will be able to view company insights, business analyst data (graphs), fund companies, and a Streamlit Chatbot interface. You are able to add more data through a DAO platform, by getting funded by companies looking for data. The investor also has the option of adding a company to the database with information about it. ## How we built it The frontend was built with Next.js, TypeScript, and Tailwind CSS. Firebase authentication was used to verify users from the home page. (Company scraping and proxies for company information) Selenium was used for web scraping for database information. Figma was used for design, authentication was done using Firebase. The backend was built using Flask, StreamLit, and Taipy. We used the Circle API and Hedera to generate bounties using blockchain. SQL and graphQL were used to generate insights, OpenAI and QLoRa were used for semantic/similarity search, and GPT fine-tuning was used for few-shot prompting. ## Challenges we ran into Having never worked with Selenium and web scraping, we found understanding the dynamic loading and retrieval of web content challenging. The measures some websites have against scraping were also interesting to learn and try to work around. We also worked with chat-GPT and did prompt engineering to generate business insights - a task that can sometimes yield unexpected responses from chat-GPT! ## Accomplishments that we're proud of + What we learned We learned how to use a lot of new technology during this hackathon. As mentioned above, we learned how to use Selenium, as well as Firebase authentication and GPT fine-tuning. ## What's next for Disruptive Disruptive can implement more scrapers for better data in terms of insight generation. This would involve scraping from other options than Golden once there is more funding. Furthermore, integration between frontend and blockchain can be improved further. Lastly, we could generate better insights into the format of proposals for clients.
losing
## Inspiration I wanted to learn to create a website from scratch solely using Figma and plugins. ## What it does My website displays cute emojis alongside motivating text that says "I'm Fantas-tech" to alleviate effects of imposter syndrome in the tech industry. ## How we built it This simple website was designed on Figma. After the designs were created, I used the Locofy plugin to help me code and deploy my site on to <http://imfantas.tech> ## Challenges we ran into The Locofy plugin came with so many features it was overwhelming to utilize. Additionally, there were some issues around responsive renderings that were difficult to debug. ## Accomplishments that we're proud of I designed and deployed the website! ## What we learned Throughout the building process, I learned to use auto-layout and dev-mode on Figma! ## What's next for I'm Fantas-tech (<http://imfantas.tech>) I'd like to implement additional motivational text in the next iteration and enhance the responsiveness of the website!
## Inspiration As a team with diverse skill sets, ranging from computer science to art and philosophy, we wanted to build something that encapsulated a problem and an interest that we all shared. This intersection was design. We realized, we've all gravitated towards design at some point in our academic careers. However, we realized that most of our design experience was related to drawing or ideating about products, and not so much of actual implementation in tools like Figma. We realized that this is because there is an inherent barrier to learning how to use tools like Figma before designs and ideas can be realized. On this note, last year, almost 18% of surveyed marketers revealed that they spend an average of over 20 hours per week creating visual content. Inspired by the mass gains in productivity accomplished through tools such as ChatGPT for programmers (i.e. no more forgetting how to implement Union-Find then searching up on Google how to implement it (๑•̀ㅂ•́)و) , our team was inspired to create an AI-driven vision/language design assistant that can also decrease the amount of time new designers spend on the "in-between" processes, such as transferring their drawings over to a Figma board, or creating a component that is styled in a certain way. Thus, our mission is to empower people **new to UI/UX graphic design** to spend more time thinking creatively about an idea and less time figuring out how to get a curve at the perfect angle on Figma. ## What it does Our tool is an interactive AI-powered design assistant manifested as a Figma plugin, that streamlines and personalizes your design process by converting a drawing to a Figma design in seconds. Once the design is on the screen, you can then (in Figma!) use ChatGPT to ask for components that fit the style *you* are going for. This allows you to not waste time trying to re-draw rectangles, make sure the borders are rounded correctly, check the color... (you get the point). We wanted to leverage the power of large language models to interpret this certain styles of designs we are thinking of automatically, and give to us in Figma, thereby minimizing potentially redundant work. Since it is a plugin, there is minimal barrier to entry for new designers trying to get engaged in the visual design space. We note for full transparency that the description above is the ideal product. While we definitely made strides and had major accomplishments our team was proud of, our final coded product was a portion of this (more specifically, two components of the platform) that we didn't have time to fully integrate. However, we did make our ideas of turning some form of HTML to Figma Components a reality. We elaborate more on this below. ## How we built it Our team decided to split the project up into two primary components: 1) Going from drawing --> HTML 2) Creating a Figma plugin that can go from HTML --> Figma elements. The first part was developed by iterating on Microsoft's SketchToCode. While this was a useful starting point, most of the code was outdated to some degree, so our team had to refactor the code base to get the model to train on a set of hand-labeled drawing datasets. The model we trained was an instance segmentation model hosted in Azure to determine common visual components in drawings such as Buttons, TextFields, Text, etc. 
The second part involved using Figma's Plugin API, which uses TypeScript and HTML, and drawing upon APIs such as OpenAI's ChatGPT as well as other Figma plugins that have functionality to translate websites from URLs into Figma components. While we were developing, we were able to track our progress by using Figma Desktop to visualize the UI/UX of our plugin and observe how it translated to the output that we desired. We also hosted a server in Node.js and Express that makes calls to the ChatGPT API. ## Challenges we ran into During the hackathon, our team ran into several bugs with existing APIs. For instance, the team members working on the drawing-to-HTML portion of our project tried to iterate on Microsoft's SketchToCode AI, which is reasonably outdated. Thus, a lot of time was spent trying to refactor some of the code that we initially hoped would work out of the box. Another example is the incorporation of ChatGPT into the Figma plugin. Our team members were new to working with TypeScript, so there was a lot of debugging involved in incorporating some Node modules with the Figma plugin. ## Accomplishments that we're proud of We're proud that, after struggling for a long time to incorporate ChatGPT into a Figma plugin (mostly due to TypeScript issues), we finally got to a point where users could input data into ChatGPT through a Figma plugin and achieve some visual output as Figma components. We're also proud that we were able to train an instance segmentation model that obtained ~84% accuracy at predicting common UI components in drawn designs. Most importantly, we're proud that our team was able to stumble across an idea that we all thought was cool, and that we could see ourselves carrying out to completion over the next few weeks. ## What we learned From the technical standpoint, we learned that while it is usually a good idea to use existing APIs to get a project off the ground, sometimes it's OK to start something new from scratch. We feel that a lot of the friction that prevented us from getting our product off the ground came from attempting to use APIs that were inherently flawed or buggy. We also found a lot of interesting ways to interact with Figma programmatically. Prior to this hackathon, none of our team members had built Figma plugins. ## What's next for FigmAI Next steps include a full integration of the drawing-to-code pipeline with the HTML-to-Figma-component pipeline (i.e. adding images through Figma, fixing issues with styling, etc.). Moving forward, our team was also thinking about a similar pipeline but for CAD (which has an even higher barrier to entry for most). In other words, what if you could draw a component on paper, and then port that design to a CAD model to start with? Our team brainstormed ways to incorporate reinforcement learning algorithms to teach an agent in CAD/Figma to perform a set of actions that minimize the difference between a ground-truth output and the model output.
## Inspiration As STEM students, many of us have completed online certification courses on various websites such as Udemy, Codecademy, Educative, etc. Many classes on these sites provide the user with a unique certificate of completion after passing the course. We wanted to take the authentication of these digital certificates to the next level. ## What it does Our application functions as a site similar to the ones mentioned earlier, providing users with a plethora of certified online courses, but what sets us apart is our creative use of web3, allowing users to access their certificates directly from the blockchain, guaranteeing their authenticity to the utmost degree. ## How we built it For our frontend, we created our design in Figma and coded it using the Vue framework. Our backend was done in Python via the Flask framework. The database we used to store users and courses was SQLite. The certificate generation was accomplished in Python via the Pillow library. To convert images into NFTs, we used Verbwire for its easy-to-use minting procedure. ## Challenges we ran into We ran into quite a few challenges throughout our project, the first of which was the fact that none of us had any meaningful web3 experience. Luckily for us, Verbwire had a quite straightforward minting process and even generated some of the code for us. ## Accomplishments that we're proud of Although our end result is not everything we dreamt of 24 hours ago, we are quite proud of what we were able to accomplish. We created quite an appealing website for our application. We created a Python script that generates custom certificates. We created a powerful backend capable of storing data for our users and courses. ## What we learned For many of us, this was a new and unique collaborative experience in software development. We learned quite a bit about task distribution and optimization, as well as key takeaways for creating code that is not only maintainable but also transferable to other developers during the development process. More technically, we learned how to create simple databases via SQLite, how to automate image generation via Python, and the steps of making a unique and appealing front-end design, starting from the prototype all the way to the final product. ## What's next for DiGiDegree Moving forward, we would like to migrate our database to Postgres to handle higher traffic. We would also like to implement a Redis cache to improve the hit ratio and speed up search times. We would also like to populate our website with more courses and improve our backend security by abstracting away SQL queries to protect us further from SQL injection attacks.
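The certificate step described above might look like the following Pillow sketch: text is drawn onto a template image and saved as a PNG, and that file would then be handed to Verbwire's minting flow. The template path, font, and text coordinates are assumptions, and the Verbwire call itself is omitted.

```python
from PIL import Image, ImageDraw, ImageFont

def make_certificate(student: str, course: str, out_path: str = "certificate.png") -> str:
    """Render a completion certificate onto a template image."""
    cert = Image.open("template.png").convert("RGB")        # assumed blank certificate template
    draw = ImageDraw.Draw(cert)
    font = ImageFont.truetype("DejaVuSans-Bold.ttf", 48)    # any available TTF works

    # Coordinates are placeholders chosen for a roughly 1600x1100 template.
    draw.text((800, 450), student, font=font, fill="black", anchor="mm")
    draw.text((800, 600), f"has completed: {course}", font=font, fill="black", anchor="mm")

    cert.save(out_path)
    return out_path  # this PNG is what gets uploaded and minted as an NFT

print(make_certificate("Ada Lovelace", "Intro to Web3"))
```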
losing
Welcome to our demo video for our hack "Retro Readers". This is a game created by our two-man team: myself, Shakir Alam, and my friend Jacob Cardoso. We are both heading into our senior year at Dr. Frank J. Hayden Secondary School and enjoyed participating in our first hackathon ever, Hack The 6ix, a tremendous amount. We spent over a week brainstorming ideas for our first hackathon project, and because we are both very comfortable with making, programming, and designing with pygame, we decided to take it to the next level using modules that work with APIs and complex arrays. Retro Readers was inspired by a social media post pertaining to a text font that was proven to help mitigate reading errors made by dyslexic readers. Jacob found OpenDyslexic, an open-source text font that does exactly that. The game consists of two overall gamemodes. These gamemodes are aimed at an audience of mainly children, including young children with dyslexia, who are aiming to become better readers. We know that reading books is becoming less popular among the younger generation, so we decided to incentivize readers by providing them with a satisfying retro-style arcade reading game. The first gamemode is a read-and-research style gamemode where the reader or player can press a key on their keyboard, which leads to a Python module calling a database of semi-sorted words from the Wordnik API. The game then displays the word back to the reader and reads it aloud using a TTS module. As for the second gamemode, we decided to incorporate a point system. Using the points, players can purchase unique customizables and visual modifications such as characters and backgrounds. This provides a little dopamine rush for the players for participating in a tougher gamemode. The gamemode itself is a spelling-type game where a random word is selected using the same Python modules and API. Then a TTS module reads the selected word out loud for the reader. The reader then must correctly spell the word, without seeing it, to attain 5 points. The task we found the most challenging was working with APIs, as a lot of them were not deemed fit for our game. We had to scratch a few APIs off the list for incompatibility reasons, including Oxford Dictionary, WordsAPI, and more. Overall, we found the game to be challenging in all the right places, and we are highly satisfied with our final product. As for the future, we'd like to implement more reliable APIs, and as for future hackathons (this being our first), we'd like to spend more time researching viable APIs for our projects. And as far as business practicality goes, we see it as feasible to sell our game at a low price, including ads and/or paid cosmetics. We'd like to give a special shoutout to our friend Simon Orr for allowing us to use two original music pieces for our game. Thank you for your time and thank you for this amazing opportunity.
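The fetch-and-speak loop described in the video might look something like this sketch: a random word is pulled from Wordnik's REST API and read aloud with `pyttsx3`. The query parameters, the length filter, and the key handling are assumptions rather than the team's exact code, and you would need your own Wordnik API key.

```python
import requests
import pyttsx3

WORDNIK_URL = "https://api.wordnik.com/v4/words.json/randomWord"
API_KEY = "YOUR_WORDNIK_API_KEY"  # placeholder

def fetch_word(min_len: int = 4, max_len: int = 8) -> str:
    """Get one random word, roughly filtered to a child-friendly length."""
    params = {"minLength": min_len, "maxLength": max_len, "api_key": API_KEY}
    resp = requests.get(WORDNIK_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["word"]

def say(word: str) -> None:
    """Read the word aloud with the local text-to-speech engine."""
    engine = pyttsx3.init()
    engine.say(word)
    engine.runAndWait()

if __name__ == "__main__":
    word = fetch_word()
    print("Spell the word you just heard!")
    say(word)
```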
## Inspiration It was our first time at a Hackathon, so we decided to put our own enjoyment and quality of time there over accomplishing any of the verticals/ challenges set out by Treehacks itself. We made a promise to ourselves that as long as we were proud with what we created regardless of how insignificant or lackluster it may be compared to the real breadwinners, we’d be successful. So we started by just throwing random pitches about things we may be able to do. Eventually, Michelle suggested using Ren’Py to create a game. At first Kendrick thought it was a joke, but we ultimately decided to go through with it, and here we are now. ## What it does It’s a short visual novel game that puts you in the shoes of a high school student transferring to a new school. You, the protagonist, recently purchased a new laptop. However, you run into the struggle of not knowing which internet browser is right for you. When you attend school, you run into three students, Google Chrome, Mozilla Firefox, and Safari, who all have different personality types. Your experience with them will help you decide which internet browser you would like to use in the end! ## How we built it We used a program called Ren’Py that helps to create visual novel type of games. The program was made using Python and Cython, but writing out the actual code uses Python. ## Challenges we ran into Kendrick: I didn’t know how to use Ren’Py, so I had to adopt the “monkey see monkey do” mindset. It was rough, but I thoroughly enjoyed the time spent on this project. Michelle: One thing I felt like was a big problem was the amount of time that we were given. Me, with poor time management skills, was always distracted, but I was able to pull through and get the game finished. ## Accomplishments that we're proud of Being the first Hackathon we have ever attended, the entire game was a huge accomplishment for us. We had fun working together, laughing at all the scenarios and ideas that we came up with. Because we struggled so hard in the beginning, we feel so accomplished that we learned from these struggles and that we were able to pull through and get as much finished as we could. Although we did not entirely complete what we had planned for the story, we enjoyed ourselves and had a great time. ## What we learned Time management: sometimes one has to buckle down and just push through to get things accomplished. Teamwork: We were pretty much at different skill levels when starting this project. However, we both motivated each other to sit ourselves down and get cracking on this project. Relax: The last thing one wants to do at a Hackathon is stress out! People barely get enough sleep already. Working at one's own pace brought us a much more enjoyable experience at this first hackathon. ## What's next for Browsing for the Right One! Browsing for the Right One! Was ultimately cut short of what it was projected to be. So, as a possible future endeavor, we may expand the content of this shell of a game by adding what we intentionally wanted inside it before the reality of the situation took its toll: a deeper plot, more choices for the player to make, as well as multiple endings.
## Inspiration Our team focuses on real-world problems. One of our own classmates is blind, and we've witnessed firsthand the difficulties he encounters during lectures, particularly when it comes to accessing the information presented on the board. It's a powerful reminder that innovation isn't just about creating flashy technology; it's about making a tangible impact on people's lives. "Hawkeye" isn't a theoretical concept; it's a practical solution born from a genuine need. ## What it does "Hawkeye" utilizes the AdHawk MindLink to provide essential visual information to the blind and visually impaired. Our application offers a wide range of functions: * **Text Recognition**: "Hawkeye" can read aloud whiteboard text, screens, and all the text that our users would not otherwise see. * **Object Identification**: The application identifies text and objects in the user's environment, providing information about their size, shape, and position. * **Answering Questions**: Hawkeye takes the place of Google for the visually impaired, using pure voice commands to search. ## How we built it We built "Hawkeye" by combining state-of-the-art computer vision and natural language processing algorithms with the AdHawk MindLink hardware. The development process involved several key steps: 1. **Data Collection**: We used open-source AI models to recognize and describe text and object elements accurately. 2. **Input System**: We developed a user-friendly voice input system that can be picked up by anyone. 3. **Testing and Feedback**: Extensive testing and consultation with the AdHawk team were conducted to fine-tune the application's performance and usability. ## Challenges we ran into Building "Hawkeye" presented several challenges: * **Real-time Processing**: We knew that real-time processing of so much data on a wearable device was possible, but we did not know how much latency there would be. Fortunately, with many optimizations, we were able to get the processing to acceptable speeds. * **Model Accuracy**: Ensuring high accuracy in text and object recognition, as well as facial recognition, required continuous refinement of our AI models. * **Hardware Compatibility**: Adapting our software to work effectively with the AdHawk MindLink's hardware posed compatibility challenges that we had to overcome. ## Accomplishments that we're proud of We're immensely proud of what "Hawkeye" represents and the impact it can have on the lives of blind and visually impaired individuals. Our accomplishments include: * **Empowerment**: Providing a tool that enhances the independence and quality of life of visually impaired individuals. No longer having to rely upon transcribers and assistants is something with real impact. * **Inclusivity**: Breaking down barriers to education and employment, making these opportunities more accessible. * **Innovation**: Combining cutting-edge technology and AI to create a groundbreaking solution for a pressing societal issue. * **User-Centric Design**: Prioritizing user feedback and needs throughout the development process to create a genuinely user-friendly application. ## What we learned Throughout the development of "Hawkeye," we learned valuable lessons about the power of technology to transform lives. Key takeaways include: * **Empathy**: Understanding the daily challenges faced by visually impaired individuals deepened our empathy and commitment to creating inclusive technology. * **Technical Skills**: We honed our skills in computer vision, natural language processing, and hardware-software integration. 
* **Ethical Considerations**: We gained insights into the ethical implications of AI technology, especially in areas like facial recognition. * **Collaboration**: Effective teamwork and collaboration were instrumental in overcoming challenges and achieving our goals. ## What's next for Hawkeye The journey for "Hawkeye" doesn't end here. In the future, we plan to: * **Expand Functionality**: We aim to enhance "Hawkeye" by adding new features and capabilities, such as enhanced indoor navigation and support for more languages. * **Accessibility**: We will continue to improve the user experience, ensuring that "Hawkeye" is accessible to as many people as possible. * **Partnerships**: Collaborate with organizations and institutions to integrate "Hawkeye" into educational and workplace environments. * **Advocacy**: Raise awareness about the importance of inclusive technology and advocate for its widespread adoption. * **Community Engagement**: Foster a supportive user community for sharing experiences, ideas, and feedback to further improve "Hawkeye." With "Hawkeye," our vision is to create a more inclusive and accessible world, where visual impairment is no longer a barrier to achieving one's dreams and aspirations. Together, we can make this vision a reality.
losing
## Hack The Valley 4 Hack the Valley 2020 project ## On The Radar **Inspiration** Have you ever been walking through your campus and wondered what's happening around you, but felt too unmotivated to search through Facebook, the school's website, and wherever else people post about social gatherings, when you just want to see what's around? Ever see an event online and think it looks like a lot of fun, just to realize that the event has already ended, or is on a different day? Do you usually find yourself looking for nearby events in your neighborhood while you're bored? Looking for a better app that could give you notifications and have all the events in one accessible place? These are some of the questions that inspired us to build "On the Radar" --- a user-friendly map navigation system that allows users to discover cool, real-time events that suit their interests and passions in the nearby area. *Now you'll be flying over the Radar!* **Purpose** On the Radar is a mobile application that matches users with nearby events that suit their preferences. The user's location is detected using a standard autocomplete search that tracks their current location. Then, the app displays a customized set of events that are currently in progress in the user's area, catered to each user. **Challenges** * Lack of RAM in some computers, see Android Studio (This made some of our tests and emulations slow, as it is a very resource-intensive program. We resolved this by having one of our team members run a massive virtual machine.) * Google Cloud (Implementing Google Maps integration and Google App Engine to host the REST API both proved more complicated than originally imagined.) * Android Studio (As it was the first time for the majority of us using Android Studio and app development in general, it was quite the learning curve for all of us to help contribute to the app.) * Domain.com (Linking our Domain.com name, flyingovertheradar.space, to our GitHub Pages was a little bit more tricky than anticipated, needing a particular CNAME DNS setup.) * Radar.io (As it was our first time using Radar.io, and the first time implementing its SDK, it took a lot of troubleshooting to get it to work as desired.) * MongoDB (We decided to use MongoDB Atlas to host our backend database needs, which took a while to get configured properly.) * JSON objects/files (These proved to be the bane of our existence and took many hours to convert into a usable format.) * REST API (Getting the REST API to respond correctly to our HTTP requests was quite frustrating; we had to try many different Java HTTP libraries before we found one that worked with our project.) * Java/XML (As some of our members had no prior experience with either Java or XML, development proved even more difficult than originally anticipated.) * Merge conflicts (Ah, good old merge conflicts: a lot of fun trying to figure out which code you want to keep, delete, or merge at 3am.) * Sleep deprivation (Overall, our team of four collectively got 24 hours of sleep over this 36-hour hackathon.) **Process of Building** * For the front end, we used Android Studio to develop the user interface of the app and its interactivity. This included a login page, a registration page, and our home page, which has a map and events near you. * MongoDB Atlas was used for the back end; we used it to store the users' login and personal information along with events and their details. 
* This link provides you with the GitHub repository of "On the Radar": <https://github.com/maxerenberg/hackthevalley4/tree/master/app/src/main/java/com/hackthevalley4/hackthevalleyiv/controller> * We also designed a prototype using Figma to plan out how the app could potentially look. The prototype's link → <https://www.figma.com/proto/iKQ5ypH54mBKbhpLZDSzPX/On-The-Radar?node-id=13%3A0&scaling=scale-down> * We also used a framework called Bootstrap to make our website. In this project, our team uploaded the website files through GitHub. The website's code → <https://github.com/arianneghislainerull/arianneghislainerull.github.io> The website's link → <https://flyingovertheradar.space/#> *Look us up at* <http://flyingovertheradar.space>
## Inspiration In the United States, every 11 seconds, a senior is treated in the emergency room for a fall. Every 19 minutes, an older adult dies from a fall, directly or indirectly. Deteriorating balance is one of the direct causes of falling in seniors. This epidemic will only grow, as the senior population will double by 2060. While we can't prevent the effects of aging, we can slow down this process of deterioration. Our mission is to create a solution to senior falls with Smart Soles, a shoe sole insert wearable and companion mobile app that aims to improve senior health by tracking balance, tracking the number of steps walked, and recommending senior-specific exercises to improve balance and overall mobility. ## What it does Smart Soles enables seniors to improve their balance and stability by interpreting user data to generate personalized health reports and recommend senior-specific exercises. In addition, academic research has indicated that seniors should walk 7,000 to 10,000 steps per day. We aim to offer seniors an intuitive and more discreet way of tracking their steps through Smart Soles. ## How we built it The general design of Smart Soles consists of a shoe sole with Force Sensing Resistors (FSRs) embedded in it. These FSRs are monitored by a microcontroller, which takes pressure readings to derive balance and mobility metrics. This data is sent via a web app to Google App Engine and then to our computer for processing, with the results delivered to the user's smartphone. Afterwards, the output data is used to generate a report on whether the user has good or bad balance. ## Challenges we ran into **Bluetooth Connectivity** Despite hours spent on attempting to connect the Arduino Uno and our mobile application directly via Bluetooth, we were unable to maintain a **steady connection**, even though we could transmit data between the devices. We believe this is due to our hardware, since our HC-05 module uses Bluetooth 2.0, which is quite outdated and is not compatible with iOS devices. The problem may also be that the module itself is faulty. To work around this, we can upload the data to Google Cloud, send it to a local machine for processing, and then send it to the user's mobile app. We would attempt to rectify this problem by upgrading our hardware to be Bluetooth 4.0 (BLE) compatible. **Step Counting** We intended to use a three-axis accelerometer to count the user's steps as they wore the sole. However, due to the final form factor of the sole and its inability to fit inside a shoe, we were unable to implement this feature. **Exercise Repository** Due to a significant time crunch, we were unable to implement this feature. We intended to create a database of exercise videos to recommend to the user. These recommendations would also be based on the balance score of the user. ## Accomplishments that we're proud of We achieved a 65% success rate with our Recurrent Neural Network model, and this was our very first time using machine learning! We also successfully put together a preliminary functioning prototype that can capture the pressure distribution. ## What we learned This hackathon was a completely new experience for us. We learned about: * FSR data and signal processing * Data transmission between devices via Bluetooth * Machine learning * Google App Engine ## What's next for Smart Soles * Bluetooth 4.0 connection to smartphones * More data points to train our machine learning model * Quantitative balance score system
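For a sense of what the FSR-to-balance step above might look like in code, here is a minimal Python sketch: it assumes the microcontroller streams comma-separated readings from four FSRs over a serial port, and the port name, baud rate, and two-sensors-per-side layout are illustrative assumptions rather than details from the write-up.

```python
# Minimal sketch: read comma-separated FSR values from a serial port and
# compute a crude left/right balance score. Port name, baud rate, and the
# assumption of four sensors (two per side) are illustrative only.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed serial port
BAUD = 9600             # assumed baud rate

def balance_score(readings):
    """Return a value in [-1, 1]: negative = weight on left, positive = right."""
    left = readings[0] + readings[1]
    right = readings[2] + readings[3]
    total = left + right
    return 0.0 if total == 0 else (right - left) / total

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    for _ in range(100):                      # sample a short window
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            values = [float(v) for v in line.split(",")]
        except ValueError:
            continue                          # skip malformed lines
        if len(values) == 4:
            print(f"balance: {balance_score(values):+.2f}")
```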
## Inspiration We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area. ## What it does We have defined a signal that, if performed in front of the camera, a machine learning algorithm is able to detect, notifying authorities that they should check out this location, for the possibility of catching a potentially suspicious suspect or simply being present to keep civilians safe. ## How we built it First, we collected data from the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project. Donya: Getting to know the basics of how machine learning works. Alok: How to deal with unexpected challenges and look at them as a positive change. Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What I learned Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information. ## What's next for Smart City SOS Hopefully working with Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
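A hedged sketch of the distress-signal idea above, in Python: the team used a different pre-trained model, so this uses MediaPipe's pose solution with a made-up "both wrists above the nose" rule standing in for their actual signal, and the video source is an assumption.

```python
# Hedged sketch: flag frames where both wrists are raised above the nose,
# a stand-in for the team's actual distress signal. The camera source and
# the gesture rule are assumptions for illustration.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def is_distress(landmarks):
    """Both wrists higher than the nose (image y grows downward)."""
    nose = landmarks[mp_pose.PoseLandmark.NOSE.value].y
    lw = landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].y
    rw = landmarks[mp_pose.PoseLandmark.RIGHT_WRIST.value].y
    return lw < nose and rw < nose

cap = cv2.VideoCapture("camera_feed.mp4")   # assumed: footage saved from the camera API
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks and is_distress(results.pose_landmarks.landmark):
            print("Possible distress signal detected - notify authorities")
cap.release()
```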
partial
Wanted to try something low-level! MenuMate is a project aimed at enhancing dining experiences by ensuring that customers receive quality, safe, and delicious food. It evaluates restaurants using health inspection records and food-site reviews, initially focusing on Ottawa with plans for expansion. Built on React, the tool faced integration challenges with frameworks and databases, yet achieved a seamless connection between the front end and back end. The current focus includes dataset expansion and technical infrastructure enhancement. The tool scrapes data from websites and reads JSON files for front-end display, primarily using technologies like BeautifulSoup, React, HTML, CSS, and JavaScript. The team encountered challenges, as it was their first experience with web scraping, and they faced difficulties in displaying the data.
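A rough sketch of the scrape-to-JSON step MenuMate describes, using BeautifulSoup in Python; the URL, table layout, and selectors are placeholders, since the actual inspection-records site is not named beyond Ottawa.

```python
# Hedged sketch: scrape restaurant names and inspection results into a JSON
# file the React front end can read. The URL and selectors are placeholders.
import json
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/ottawa-inspections"   # placeholder source

resp = requests.get(URL, timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

records = []
for row in soup.select("table tr")[1:]:          # assumed: one table row per restaurant
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if len(cells) >= 2:
        records.append({"name": cells[0], "result": cells[1]})

with open("inspections.json", "w") as f:         # consumed by the front end
    json.dump(records, f, indent=2)
```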
## Inspiration I manage all my passwords manually. ## What it does It stores all of my passwords. ## How I built it Using React Native. ## Challenges I ran into Designing security into the app, along with parts of the visual design and the frontend implementation. ## Accomplishments that I'm proud of How fast we were able to go from design to implementation, on both the front end and back end. ## What I learned I learned about security, native mobile development, and networking. ## What's next for Citadel Biometric authentication, browser access, a more user-friendly design, a backend-frontend connection, and more data encryption.
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves: enabling people to learn more about themselves and how they feel. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We used an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps an essential rite of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask-served frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
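A rough Python sketch of the middle of the BrAInstorm pipeline: mapping a brainwave measurement to mood keywords and building a prompt for the image model. The band-power thresholds are invented for illustration, and generate_image is a placeholder for whatever Stable Diffusion interface the team actually used.

```python
# Hedged sketch: turn EEG band powers into mood keywords and a prompt.
# Thresholds are invented for illustration; generate_image is a placeholder
# for the actual Stable Diffusion integration.
def mood_keywords(alpha, beta, theta):
    """Very rough mapping from relative band power to mood words."""
    if alpha > beta and alpha > theta:
        return ["calm", "serene", "soft light"]
    if beta > alpha:
        return ["focused", "energetic", "sharp lines"]
    return ["dreamy", "drifting", "hazy colours"]

def build_prompt(keywords):
    return "an abstract painting that feels " + ", ".join(keywords)

def generate_image(prompt):
    # Placeholder: call the image model here and return a file path or URL.
    print(f"[stub] would generate art for prompt: {prompt!r}")
    return "art.png"

if __name__ == "__main__":
    sample = {"alpha": 0.52, "beta": 0.31, "theta": 0.17}   # assumed band powers
    prompt = build_prompt(mood_keywords(**sample))
    generate_image(prompt)
```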
losing
# Date-the-6ix ## Hack the 6ix project The story behind DateThe6ix is that we wanted a mobile app where users could look through events happening on days they were free and plan out their day, including which events they're interested in attending and which restaurants they want to visit. **How it works**: Ideally, users would be able to log in with Facebook and browse events and restaurants in their area. If they were interested in an event or restaurant, they would swipe right to shortlist it; otherwise they would swipe left. Because the app is connected to Facebook, the user would be able to see events that their friends were interested in, and message them if they also swiped right on the event. Before the event, the app would send a reminder notification. **How we built it**: The events are pulled from Facebook using the Facebook API. Our backend Java code is able to take in a query for what the user is interested in and search for related events. The restaurant data is pulled from Yelp using the Yelp API. The front end of the app is created using a platform called "Outsystems" which is used for creating the UI of Android apps. We were able to run the backend code to collect data and fill the database in Outsystems; however, due to time restrictions we were unable to connect the backend to the front end to be executed at run time. The data in the database in Outsystems is from a previous run of our backend code. We also did not have enough time to limit the events to only those near the user. In the future we would want to implement functions to further parse the JSON response of the Facebook API and get the location data. **Challenges**: None of us were familiar with Java, so learning Java to implement the backend and fire HTTP requests was a challenge. We were also trying to execute our Java code in the terminal, which involved setting environment variables and created some issues. For the front end, we realized that the platform was not very flexible and it was difficult to create a nice UI with standard CSS. Integrating the UI with code was also a complicated process. **We are proud of**: We really like our idea and the swiping interface that we managed to create. We're also proud of being able to use APIs and store data in an Excel file using our Java backend. For some members it was also the first time successfully doing HTTP requests in PowerShell. **What we learned**: Using the Yelp and Facebook APIs, Java, and Outsystems; also, Android Studio is more flexible than Outsystems. **What's next**: Orbis Challenge hackathon **Built with**: Facebook API, Yelp API, Outsystems, Java
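Date-the-6ix's backend was written in Java, but the Yelp side of the data pull can be sketched in Python against the Yelp Fusion search endpoint; the API key, location, and the fields kept for the swiping UI are assumptions.

```python
# Hedged sketch: fetch nearby restaurants from the Yelp Fusion API and keep
# the fields a swiping UI would need. API key and field choices are assumed.
import requests

API_KEY = "YOUR_YELP_API_KEY"                      # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

params = {"term": "restaurants", "location": "Toronto, ON", "limit": 20}
resp = requests.get("https://api.yelp.com/v3/businesses/search",
                    headers=HEADERS, params=params, timeout=10)
resp.raise_for_status()

cards = [
    {
        "name": b["name"],
        "rating": b.get("rating"),
        "address": " ".join(b["location"]["display_address"]),
    }
    for b in resp.json().get("businesses", [])
]
for card in cards:
    print(card)
```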
## Inspiration With the world in a technology age, it is easy to lose track of human emotions when developing applications to make the world a better place. Searching for restaurants using multiple filters and reading reviews is oftentimes inefficient, leading the customer to give up searching and settle for something familiar. With a more personal approach, we hope to connect people to restaurants that they will love. ## What it does Using Indico's machine learning API for text analysis, we are able to create personality profiles for individuals and recommend them restaurants enjoyed by people of a similar personality. ## How we built it Backend: We started by drafting the architecture of the application, then defining the languages, frameworks, and APIs to be used within the project. On the day of the hackathon, we proceeded to create a set of mock data from the Yelp dataset. The dataset is then imported into MongoDB and managed through mLab. In order to query the data, we used Node.js and Mongoose to communicate with the database. Frontend: The front end is built off of the Semantic UI framework. We used default layouts to start and then built on top of them as new functionality was required. The landing page was developed from something a member had done in the past, using Modernizr and Bootstrap slideshow functionality to rotate through background images. Lastly, we used EJS as our templating language as it integrates with Express very easily. ## Challenges we ran into 1. We realized that the datasets we compiled were not diverse enough to show a wide range of possible results. 2. The team had an overall big learning curve throughout the weekend, as we all were picking up some new languages along the way. 3. There was an access limit to the resources that we were using for testing our application, which we had not predicted. ## Accomplishments that we're proud of 1. Learning new web technologies, frameworks and APIs that are available and hot in the market at the moment! 2. Using the time before the hackathon to brainstorm and discuss each team member's task in a little more depth. 3. Working together collaboratively using version control through Git! 4. Asking for help and guidance when needed, which leads to a better understanding of how to implement certain features. ## What we learned Node.js, Mongoose, mLab, Heroku, NoSQL databases, API integration, machine learning & sentiment analysis! ## What's next for EatMotion We hope that with our web app and with continued effort, we may be able to predict restaurant preferences for people with a higher degree of accuracy than before.
## Inspiration We were heavily focused on the machine learning aspect and realized that we lacked any datasets which could be used to train a model. So we tried to figure out what kind of activity that might impact insurance rates we could also collect data on, right from the equipment we had. ## What it does Insurity takes a video feed from a person driving and evaluates it for risky behavior. ## How we built it We used Node.js, Express, and Amazon's Rekognition API to evaluate facial expressions and personal behaviors. ## Challenges we ran into This was our third idea. We had to abandon two other major ideas because the data did not seem to exist for the purposes of machine learning.
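Insurity's stack was Node/Express, but the Rekognition call at the core of the idea can be sketched in Python with boto3: one driver-facing frame goes in, and the face attributes come back for a simple risk check. The frame file and the eyes-closed heuristic are illustrative assumptions.

```python
# Hedged sketch: send one driver-facing frame to Amazon Rekognition and flag
# possible drowsiness from the returned face attributes. The frame file and
# the eyes-closed heuristic are assumptions for illustration.
import boto3

rekognition = boto3.client("rekognition")          # uses your configured AWS credentials

with open("frame.jpg", "rb") as f:                 # assumed: a frame from the driver feed
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],                            # include emotions, eyes open/closed, etc.
)

for face in response["FaceDetails"]:
    eyes_open = face["EyesOpen"]["Value"]
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]
    if not eyes_open:
        print("Risky behaviour: eyes appear closed")
    print(f"Dominant emotion: {top_emotion}")
```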
losing
## Inspiration As lane-keep assist and adaptive cruise control features are becoming more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system. ## What it does We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately, by braking, slowing down, and/or turning. ## How we built it Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle. ## Challenges we ran into One of the main challenges was transferring data from an iOS app/device to the Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly. ## Accomplishments that we're proud of Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project, and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight! ## What we learned We learned about Swift and, more generally, about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future. ## What's next for Anti-Bumper Car - A Collision Avoidance System Specifically for this project, solving an issue related to file I/O and reducing latency would be the next step in providing a more reliable collision avoidance system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road.
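The Anti-Bumper Car bridge ran on Autocode, but its shape can be sketched locally in Python: accept the iOS app's POV data over HTTP, map it to a driving-mode byte, and push that byte to the Arduino over serial. The route name, mode encoding, thresholds, and serial port are all assumptions.

```python
# Hedged sketch: a local stand-in for the HTTP endpoint that relays the
# iOS app's detections to the Arduino. Route, mode encoding, thresholds,
# and serial port are assumptions for illustration.
from flask import Flask, request, jsonify
import serial  # pyserial

app = Flask(__name__)
arduino = serial.Serial("/dev/ttyUSB0", 9600)      # assumed port and baud

def choose_mode(distance_m, closing_speed):
    """Map the POV's distance/closing speed to a one-byte driving command."""
    if distance_m < 0.3:
        return b"B"        # brake
    if distance_m < 1.0 or closing_speed > 0.5:
        return b"S"        # slow down
    return b"G"            # go

@app.route("/pov", methods=["POST"])
def pov():
    data = request.get_json(force=True)
    mode = choose_mode(data.get("distance", 99), data.get("closing_speed", 0))
    arduino.write(mode)                            # the Arduino sketch actuates the motors
    return jsonify({"mode": mode.decode()})

if __name__ == "__main__":
    app.run(port=5000)
```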
## Inspiration At the University of Toronto, accessibility services are always in demand of more volunteer note-takers for students who are unable to attend classes. Video lectures are not always available, and most profs either don't post notes, or post very imprecise or insufficiently detailed notes. Without a doubt, the best way for students to learn is to attend in person, but what is the next best option? That is the problem we tried to tackle this weekend with notepal. Other applications include large-scale presentations such as corporate meetings, or use by regular students who learn better through visuals and audio rather than note-taking. ## What it does notepal is an automated note-taking assistant that uses both computer vision and speech-to-text NLP to generate nicely typed LaTeX documents. We made a built-in file management system, and everything syncs with the cloud upon command. We hope to provide users with a smooth, integrated experience that lasts from the moment they start notepal to the moment they see their notes on the cloud. ## Accomplishments that we're proud of Being able to integrate so many different services, APIs, and command-line SDKs was the toughest part, but also the part we tackled really well. This was the hardest project in terms of the number of services/tools we had to integrate, but a rewarding one nevertheless. ## What's Next * Better command/cue system to avoid having to use direct commands each time the "board" refreshes. * Create our own word editor system so the user can easily edit the document, then export and share with friends. ## See For Yourself Primary: <https://note-pal.com> Backup: <https://danielkooeun.lib.id/notepal-api@dev/>
## Inspiration Traffic is a pain and a hurdle for everyone. It costs time and money for everyone stuck within it. We wanted to empower everyone to focus on what they truly enjoy instead of having to waste their time in traffic. We found the challenge of connecting autonomous vehicles and enabling them to work closely with each other to maximize traffic flow to be very interesting. We were specifically interested in aggregating real data to make decisions and evolving those decisions over time using artificial intelligence. ## What it does We engineered an autonomous network that minimizes the time delay for each car in the network as it moves from its source to its destination. The idea is to have 0 intersections, 0 accidents, and maximal traffic flow. We did this by developing a simulation in P5.js and training a network of cars to interact with each other in such a way that they do not collide and still travel from their source to their target destination safely. We slowly iterated on this idea by first introducing the notion of incentive factors and penalty points. This allowed the cars to learn not to collide with each other and to follow the goal they're set out to do. After creating a full simulation with intersections (allowing cars to turn and drive so they stop the least number of times), we created a simulation in Unity. This simulation looked much nicer and took the values trained by the best result from our genetic AI. From the video, we can see that the generation is flawless; there are no accidents, and traffic flows seamlessly. This was the result of hundreds of generations of training of the genetic AI. You can see our video for more information! ## How I built it We trained an evolutionary AI on many physical parameters to optimize for no accidents and maximal speed. This allowed the AI to experiment with different weights for each factor in order to reach our goal: having the cars travel from source to destination while staying a safe distance away from all other cars. ## Challenges we ran into Deciding which parameters to tune, removing any bias, and setting up the testing environment. To remove bias, we ended up introducing randomly generated parameters in our genetic AI and "breeding" two good outcomes. Setting up the simulation was also tricky as it involved a lot of vector math. ## Accomplishments that I'm proud of Getting the network to communicate autonomously and work in unison to avoid accidents and maximize speed. It's really cool to see the genetic AI evolve from not being able to drive at all to being fully autonomous in our simulation. If we wanted to apply this to the real world, we could add more parameters and have the genetic AI find the parameters needed to reach our goals in the fastest time. ## What I learned We learned how to model and train a genetic AI. We also learned how to deal with common issues and handle performance constraints effectively. Lastly, we learned how to decouple the components of our application to make it scalable and easier to update in the future. ## What's next for Traffix We want to expand the user-facing features of the mobile app and improve the data analytics platform for the city. We also want to be able to extend this to more generalized parameters so that it could be applied in more dimensions.
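A minimal sketch of the evolve-select-breed loop Traffix describes, in Python: candidate controllers are weight vectors, fitness rewards arrivals and penalizes collisions, and the best candidates are bred with mutation. The simulation itself is stubbed out, and every number here is an assumption.

```python
# Hedged sketch: the evolve-select-breed loop behind a traffic controller.
# simulate() is a stub standing in for the P5.js/Unity simulation; fitness
# weights, population size, and rates are illustrative assumptions.
import random

N_WEIGHTS, POP_SIZE, GENERATIONS = 6, 50, 100

def simulate(weights):
    """Stub: run the traffic sim with these controller weights and return
    (cars_arrived, collisions). Replace with the real simulation."""
    return random.randint(0, 20), random.randint(0, 5)

def fitness(weights):
    arrived, collisions = simulate(weights)
    return arrived - 10 * collisions            # heavy penalty for accidents

def breed(a, b):
    child = [random.choice(pair) for pair in zip(a, b)]        # crossover
    return [w + random.gauss(0, 0.1) if random.random() < 0.2 else w
            for w in child]                                    # mutation

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: POP_SIZE // 5]                            # keep the top 20%
    population = elite + [breed(random.choice(elite), random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]
print("best weights:", max(population, key=fitness))
```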
winning
## welcome to Catmosphere! we wanted to make a game with (1) cats and (2) cool art. inspired by the many "cozy indie" games on steam and on social media, we got working on a game where the cat has to avoid all the obstacles as it attempts to go into outer space. **what it does**: use the WASD keys to navigate our cat around the enemies. enter the five levels of the atmosphere and enjoy the art and music while you're at it! **what's next for Catmosphere**: adding more levels, a restart button, & a new soundtrack and artwork
## What it does XEN SPACE is an interactive web-based game that incorporates emotion recognition technology and the Leap motion controller to create an immersive emotional experience that will pave the way for the future gaming industry. ## How we built it We built it using three.js, Leap Motion Controller for controls, and Indico Facial Emotion API. We also used Blender, Cinema4D, Adobe Photoshop, and Sketch for all graphical assets.
## Inspiration We all have friends who really love cats, and we thought a desktop cat would be a simple but effective way to keep people's spirits up. ## What it does It acts cute on your desktop, sleeping and waking up, wagging its tail, etcetera. Occasionally Mishookoo will make cute and motivating comments to encourage self-care and improve mood. ## How we built it We used an online tutorial and then went off and modified it to use our art assets, which we also created ourselves, and added speaking functionality where it reads off a text file populated with lines for it to say. ## Challenges we ran into The tutorial really did not work well and wasn't well explained, so we essentially had to reverse engineer it and rewrite it to suit our specific implementation. ## Accomplishments that we're proud of It's cute, swaps animations smoothly, and is a unique but also retro idea. ## What we learned That Python can be surprisingly painful. ## What's next for Mishookoo | DeskCat If we can get people to use it, we'll try to get it to run a little more smoothly, add more lines, and polish the art. Maybe expand functionality.
winning
## Inspiration We were inspired by our passion for boba tea and the wide variety of different combinations you could make. ## What it does It allows you to build a personalized boba tea. ## How we built it We used HTML, CSS, and JavaScript to create this. ## Challenges we ran into We faced the challenges of not having enough time and lacking knowledge of the languages we used. ## Accomplishments that we're proud of We're proud of how visually appealing it looks and the effort we put in. ## What we learned We learned more about the languages we used and that fruit teas are healthier than milk teas. ## What's next for Boba Baby We hope that we can add a greater variety of options!
## Inspiration What most inspired this project was the fact that we are planning on going to college, and in college, it can be notoriously difficult to find healthy, delicious food when the dining hall isn't available. We also don't have recipes on hand, so a feature that would suggest recipes based on what YOU have would be incredibly helpful. It could also be very helpful for those in financially difficult situations, as they can find meals that suit their current circumstances. ## What it does You can input the different ingredients that you have, and the program will return recipes from the database with the same ingredients. It will list the recipes in order of how well their ingredients match up with what you input. ## How we built it We did full-stack development and split into two groups, one for the client side and one for the server side. The server side created the MongoDB database and sent data from there to the client. The client side worked a lot with HTML and CSS to make it user friendly. ## Challenges we ran into Server-side coding was incredibly difficult. Because we are not as familiar with these programs, it was difficult to get a grasp on how the different languages worked. We not only faced the language barrier, but each language had its own restrictions on what it can and cannot do, and because they differed so vastly from each other, finding solutions proved to be difficult. ## Accomplishments that we're proud of We're proud of attempting a full-stack project because we don't have much experience and it's our first time making a project as complex as this. All of us are proud of the amount of information and coding that we've learned (e.g., learning MongoDB, learning HTML). ## What we learned We learned a lot of new things and ran into many challenges. We are very proud because we learned two new coding languages, different methods of development, server-side coding, and more. ## What's next for RecipEasy In the future, we want to make our program broader and expand the database. We would also implement an algorithm to sort recipes in order from most relevant to least relevant and return those closer to the top. More filters and fields such as dietary restrictions, cook time, and difficulty would also be added.
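The relevance-sorting step RecipEasy lists under "What's next" reduces to scoring each recipe by how much of its ingredient list the user already has and sorting by that score; a minimal Python sketch with made-up data follows (in the app the recipes would come from MongoDB).

```python
# Hedged sketch: rank recipes by overlap with the user's ingredients.
# The recipe data here is made up; in the app it would come from MongoDB.
def rank_recipes(user_ingredients, recipes):
    user = {i.lower() for i in user_ingredients}
    scored = []
    for recipe in recipes:
        needed = {i.lower() for i in recipe["ingredients"]}
        if not needed:
            continue
        coverage = len(needed & user) / len(needed)   # fraction of the recipe you already have
        scored.append((coverage, recipe["name"]))
    return sorted(scored, reverse=True)

recipes = [
    {"name": "Tomato pasta", "ingredients": ["pasta", "tomato", "garlic"]},
    {"name": "Omelette", "ingredients": ["egg", "butter", "cheese"]},
]
print(rank_recipes(["tomato", "garlic", "egg"], recipes))
```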
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
losing
## Inspiration for Creating sketch-it Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we are inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more so than ever, we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives. ## What it does Sketch-it accepts any image and breaks down how you can sketch that image into 15 easy-to-follow steps so that you can follow along one line at a time. ## How we built it On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server. On the back end, we used the Python libraries scikit-image and Matplotlib to create visualizations of the lines that make up that image. We broke down the process into frames and adjusted the features of the image to progressively create a more detailed image. ## Challenges we ran into We initially had some issues with scikit-image, as it was our first time using it, but we soon worked our way around the import errors and were able to utilize it effectively. ## Accomplishments that we're proud of Challenging ourselves to use frameworks and libraries we hadn't used before and grinding through the project until the end! 😎 ## What we learned We learned a lot about personal working styles, the integration of different components on the front-end and back-end sides, as well as some new possible projects we would want to try out in the future! ## What's next for sketch-it Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience!
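The progressive-detail idea behind sketch-it can be approximated with scikit-image by sweeping an edge detector from coarse to fine; a hedged Python sketch follows, where the 15-step schedule and the Canny sigma range are assumptions rather than the team's exact pipeline.

```python
# Hedged sketch: produce progressively more detailed line drawings of an
# image by lowering the Canny smoothing sigma step by step. The step count
# and sigma range are assumptions.
import matplotlib.pyplot as plt
import numpy as np
from skimage import io, color, feature

image = io.imread("input.jpg")                # assumed: an RGB image upload
gray = color.rgb2gray(image)

sigmas = np.linspace(6.0, 1.0, 15)            # coarse outlines first, detail last
for step, sigma in enumerate(sigmas, start=1):
    edges = feature.canny(gray, sigma=sigma)
    plt.imsave(f"step_{step:02d}.png", edges.astype(float), cmap="gray")
    print(f"saved step {step} (sigma={sigma:.1f})")
```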
## Inspiration In traditional finance, banks often swap cash flows from their assets for a fixed period of time. They do this because they want to hold onto their assets long-term, but believe their counter-party's assets will outperform their own in the short-term. We decided to port this over to DeFi, specifically Uniswap. ## What it does Our platform allows for the lending and renting of Uniswap v3 liquidity positions. Liquidity providers can lend out their positions for a short amount of time to renters, who are able to collect fees from the position for the duration of the rental. Lenders are able to both hold their positions long term AND receive short term cash flow in the form of a lump sum ETH which is paid upfront by the renter. Our platform handles the listing, selling and transferring of these NFTs, and uses a smart contract to encode the lease agreements. ## How we built it We used solidity and hardhat to develop and deploy the smart contract to the Rinkeby testnet. The frontend was done using web3.js and Angular. ## Challenges we ran into It was very difficult to lower our gas fees. We had to condense our smart contract and optimize our backend code for memory efficiency. Debugging was difficult as well, because EVM Error messages are less than clear. In order to test our code, we had to figure out how to deploy our contracts successfully, as well as how to interface with existing contracts on the network. This proved to be very challenging. ## Accomplishments that we're proud of We are proud that in the end after 16 hours of coding, we created a working application with a functional end-to-end full-stack renting experience. We allow users to connect their MetaMask wallet, list their assets for rent, remove unrented listings, rent assets from others, and collect fees from rented assets. To achieve this, we had to power through many bugs and unclear docs. ## What we learned We learned that Solidity is very hard. No wonder blockchain developers are in high demand. ## What's next for UniLend We hope to use funding from the Uniswap grants to accelerate product development and add more features in the future. These features would allow liquidity providers to swap yields from liquidity positions directly in addition to our current model of liquidity for lump-sums of ETH as well as a bidding system where listings can become auctions and lenders rent their liquidity to the highest bidder. We want to add different variable-yield assets to the renting platform. We also want to further optimize our code and increase security so that we can eventually go live on Ethereum Mainnet. We also want to map NFTs to real-world assets and enable the swapping and lending of those assets on our platform.
## Inspiration We had a wine and cheese last week. ## Challenges I ran into AWS + Python 3, and connecting to the domain. ## What's next for Whine and Cheese A team wine and cheese
winning
## Inspiration The Arduino community provides a full ecosystem of development tools, and I saw the potential in using hardware, IoT, and cloud integration to provide a unique solution for streamlining business processes. ## What it does The web app provides the workflow for a one-stop place to manage hundreds of different sensors by adding intelligence to each utility provided by the Arduino REST API. Imagine a health-care company that needs to manage all its heart-rate sensors and derive insights quickly and continuously from patient data. Or picture a way for a business to manage customer device location parameters by inputting customized conditions on the data or parameters. Or a way for a child to control her robot-controlled coffee machine from school. This app provides many different possibilities for use cases. ## How we built it I connected iPhones to the Arduino cloud, and built a web app with Node.js that uses the Arduino IoT API to connect to the cloud, and connected MongoDB to make the app more efficient and scalable. I followed the CRM architecture to build the app, and implemented best practices to keep scalability in mind, since it is the main focus of the app. ## Challenges we ran into A lot of the problems faced were naturally in the web application, and it required a lot of time. ## Accomplishments that we're proud of I am proud of the app and its usefulness in different contexts. This is a creative solution that could have real-world uses if the intelligence is implemented carefully. ## What we learned I learned a LOT about web development, database management, and API integration. ## What's next for OrangeBanana Provided we have more time, we would implement more sensors and more use cases for handling each of these.
## Inspiration Our project was inspired by geocaching. Geocaching is the idea of crowdsourcing scavenger hunts, where users can hide caches throughout a city and other users can go on adventures to discover them. Team: Dee Lucic, Matt Mitchell, Keshav Chawla, Alexei dela Pena. For this project we wanted to use the following concepts: 1. The idea of scavenger hunts / finding things in the environment 2. Staying active and "taking care of yourself" to support Wealthsimple's motto 3. To go on an adventure and to "push your limits" with Mountain Dew's motto 4. To use the Indico image recognition API to recognize these scavenger hunt objects 5. To use Twitter to share photos of scavenger hunt objects ## What it does We have developed a marketing tool: a Pepsi/Mountain Dew app that gives users a list of tasks (scavenger hunt items) that they need to find in the environment. The tasks encourage an active lifestyle, like taking a picture of a ping pong ball. When the user finds the item, they take a picture of it and the app verifies whether the item is correct through image recognition. If enough items are found, then the app will unlock a DewTheDew fridge, which will vend free Mountain Dew to the lucky winner. All photos are posted on Twitter under the hashtag DoForDew. ## How I built it Our app is an iOS app built in Swift. It communicates with our local PHP web service that is hosted on a laptop. The iOS app also tweets our photos. The PHP web service calls Python scripts that call the Indico image recognition API. The web service also sends serial commands over the USB port to the fridge, which contains an Arduino. The Arduino is programmed to process serial commands and control an analog servo motor that locks and unlocks the door of the fridge. ## Challenges I ran into We initially tried to communicate directly with the Arduino from the iOS app using Bluetooth, but the Arduino and iOS Bluetooth libraries are complicated, and we would not have had enough time to implement the code in time. ## Accomplishments that I'm proud of The app developer is new to iOS and Swift, and we are happy with what we have created so far. The web service is very powerful and can handle different expansions if required. To make the fridge, we had to go to a local hardware store and find the parts to create a working model in time. The challenge was difficult, but we are happy with the progress so far. ## What I learned All the developers on our team are inexperienced, and we are happy that we were able to integrate all of this in just a day's worth of development. We learned how to make UIs in iOS, how to pass information over PHP, serial communications, and REST APIs. ## What's next for DoTheDew HackTheFridge The next thing to do is to integrate Bluetooth into the iOS app and set up a Bluetooth shield on the Arduino. Also, the Indico image recognition APIs can be called from the iOS app too. Once these two changes are made, we will no longer require the web-service server and we can get rid of the laptop. That's the first step to creating a standalone product. After that we'll be able to create a real fridge.
## Inspiration: ## What it does: Creates an interactive playlist where users can request songs by communicating remotely with a DJ ## How I built it: Node.js, React, Arduino, CSS, HTML, JavaScript, Firebase ## Challenges I ran into: Connecting the Arduino to the database without a Wi-Fi/Bluetooth connection, as well as some persistent functionality issues with the web app. ## Accomplishments that I'm proud of: Successfully communicating between the Arduino and the database, using new technology, learning more about React and CSS, and successfully utilizing the Spotify API ## What I learned: Hardware functionality as well as dynamic databases ## What's next for Aruba: Location services
partial
## Inspiration When we heard about using food as a means of love and connection from Otsuka x VALUENEX's Opening Ceremony presentation, our team was instantly inspired to create something that would connect Asian American Gen Z with our cultural roots and immigrant parents. Recently, there has been a surge of instant Asian food in American grocery stores. However, the love that exudes out of our mother's piping hot dishes is irreplaceable, which is why it's important for us, the loneliest demographic in the U.S., to cherish our immigrant parents' traditional recipes. As Asian American Gen Z ourselves, we often fear losing out on beloved cultural dishes, as our parents have recipes ingrained in them from years of repetition and thus have neglected to document these precious recipes. As a result, many of us don't have access to recreating these traditional dishes, so we wanted to create a web application that encourages the sharing of traditional, cultural recipes from our immigrant parents to Asian American Gen Z. We hope that this will reinforce cross-generational relationships, alleviate feelings of disconnect and loneliness (especially in immigrant families), and preserve memories and traditions. ## What it does Through this web application, users have the option to browse through previews of traditional Asian recipes, posted by Asian or Asian American parents, featured on the landing page. If they choose to browse, users can filter recipes by culture to get closer to finding the perfect dish that reminds them of home. In the previews of the dishes, users will find the difficulty of the dish (via the number of knives; more knives means more difficult), the cultural type of dish, and will also have the option to favorite/save a dish. Once they click on the preview of a dish, they will be greeted by an expanded version of the recipe, featuring the name and image of the dish, ingredients, and instructions on how to prepare and cook the dish. Users who want to add recipes to *yumma* can use a modal box and input various details about the dish. Additionally, users can supplement their recipes with stories about the meaning behind each dish, sparking warm memories that will last forever. ## How we built it We built *yumma* using ReactJS as our frontend, Convex as our backend (made easy!), Material UI for the modal component, CSS for styling, GitHub to manage our versions, a lot of helpful tips and guidance from mentors and sponsors (♡), a lot of hydration from Pocari Sweat (♡), and a lot of love from puppies (♡). ## Challenges we ran into Since we were all relatively beginners in programming, we initially struggled with simply being able to bring our ideas to life through successful, bug-free implementation. We turned to a lot of experienced React mentors and sponsors (shoutout to Convex) for assistance in debugging. We truly believe that learning from such experienced and friendly individuals was one of the biggest and most valuable takeaways from this hackathon. We additionally struggled with styling because we were incredibly ambitious with our design and wanted to create a high-fidelity functioning app; however, HTML/CSS styling can take large amounts of time when you barely know what a flexbox is. We also struggled heavily with getting our app to function, due to one of its main features being in a popup menu (Modal from Material UI). We worked around this by creating an extra button in order to accomplish the functionality we needed. 
## Accomplishments that we're proud of This is the first hackathon for all of us! We also only recently started getting into app development, and each of us has around a year or less of experience, so this was kind of a big deal to each of us. We were excitedly anticipating the challenge of starting something new from the ground up. While we were not expecting to even be able to submit a working app, we ended up accomplishing some of our key functionality and creating high-fidelity designs. Not only that, but each and every one of us got to explore interests we didn't even know we had. We are not only proud of our hard work in actually making this app come to fruition, but also that we were all so open to putting ourselves out of our comfort zone and realizing our passions for these new endeavors. We tried new tools, practiced new skills, and pushed our necks to the most physical strain they could handle. Another accomplishment that we are proud of is simply the fact that we never gave up. It could have been very easy to shut our laptops and run around the Main Quadrangle, but our personal ties and passion for this project kept us going. ## What we learned On the technical side, Erin and Kaylee learned how to use Convex for the first time (woo!) and learned how to work with components they never knew could exist, while Megan tried her hand at React and CSS for the first time while coming up with some stellar wireframes. Galen was a double threat, going back to her roots as a designer while helping us develop our display component. Beyond those skills, our team was able to connect with some of the company sponsors and reinvigorate our passion for why we chose to go down the path of technology and development in the first place. We also learned more about ourselves: our interests, our strengths, and our ability to connect with each other through this unique struggle. ## What's next for yumma Adding the option to upload private recipes that can only be visible to you and any other user you invite to view it (so that your Ba Ngoai's, or grandma's, recipes stay a family secret!). Adding more dropdown features to the input fields so that some will be easier and quicker to use. A messaging feature where you can talk to other users and connect with them, so that cooking meetups can happen and you can share this part of your identity with others. Allowing users to upload photos of what they make from recipes and post them, where the most recent photos for each recipe will be displayed as part of a carousel on each recipe component. An ingredients list that users can edit to keep track of things they want to grocery shop for while browsing.
# Omakase *"I'll leave it up to you"* ## Inspiration On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how? ## What It Does We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use their ingredients. ## What We Learned Most of the members in our group were inexperienced in mobile app development and backend work. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more. ## How We Built It We started with an Android application with access to the user's phone camera. This app was created using Kotlin and XML. Android's ViewModel architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again to classify the objects in them into specific ingredients, while circumventing the API's imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID, and a set of recipes for that ingredient was retrieved. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP. ## What We Are Proud Of We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put in an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty-looking recipes, including a Caribbean black bean and fruit salad that uses oranges and salsa. ## Challenges We Faced Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API, since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult. ## What's Next We have big plans for our app in the future. One next step we would like to implement is allowing users to include their dietary restrictions and food preferences so we can better match recommendations to the user. We also want to make this app available on smart fridges; currently, fridges like Samsung's have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods.
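The recipe-matching step at the end of Omakase's pipeline is easy to sketch in Python: map each detected ingredient to the set of recipe IDs that use it, then keep the recipes hit by the most ingredients. The index here is made up; the real one came from Food.com.

```python
# Hedged sketch: pick recipes that use the most of the detected ingredients.
# The ingredient->recipes index is invented; the real one came from Food.com.
from collections import Counter

ingredient_to_recipes = {
    "salsa":  {101, 102},
    "orange": {102, 103},
    "egg":    {104},
}

def best_recipes(detected, index, top_n=3):
    counts = Counter()
    for ingredient in detected:
        counts.update(index.get(ingredient, set()))   # +1 per matched ingredient
    return [recipe_id for recipe_id, _ in counts.most_common(top_n)]

print(best_recipes(["salsa", "orange"], ingredient_to_recipes))  # recipe 102 matches both
```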
## Inspiration Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops. ## What it does EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app. -The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations. -The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa). -The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique. -The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers. EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time. ## How we built it We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js. ## Challenges we ran into We initially planned on using a Mesh-Networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this we ended up switching to using a centralized server that hosts the apps data. We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults. ## Accomplishments that we're proud of EDT devices can share entire notes and screens with each other, as well as hold a fake pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can. ## What we learned 1. Get a better String class than the default Arduino one. 2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability. ## What's next for EDT - A Lightweight Tablet for Education More supported educational apps such as: a visual-programming tool that supports simple block-programming, a text editor, a messaging system, and a more-indepth UI for everything.
partial
## Inspiration We were inspired by Duolingo, which makes learning languages fun! Physics is often a difficult subject for many students, so we wanted to revisit some concepts from the physics curriculum in a fun way using simulation 🤓 ## What it does Our web app has a dashboard of user-interactive simulations that act as physics lessons. We wanted to make online physics simulations more interesting, so we implemented a working fruit-themed projectile motion game that allows you to change the angle and speed of a tomato, which splatters when it falls. ## How we built it We built Galileo using Matter.js (a JavaScript physics library), HTML, CSS, and frameworks such as Astro and Tailwind CSS. Figma was used for wireframing, planning, and prototyping. We used an Astro template to get our landing page started. ## Challenges we ran into Implementing Matter.js and figuring out how to make physics interesting to learn. We also struggled with combining logic and the Astro templates. ## Accomplishments that we're proud of Our newest member was a first-year engineering student and a hackathon beginner 🤫 🧏‍♂️ 🧏‍♂️ We also all tried something outside of our comfort zones and learned a lot from each other! ## What we learned SO proud of our tomatoes 🍅 ❤️ Using JavaScript libraries, and applying physics rules and logic in code ## What's next for Galileo (Dynasim) Potentially creating more games and trying more physics libraries. Adding a review feature.
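The projectile-motion lesson above runs on Matter.js in the browser, but the underlying kinematics the player experiments with (the angle and speed of the tomato) can be sketched in a few lines of Python, ignoring air resistance:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tomato_trajectory(speed, angle_deg, launch_height=0.0):
    """Return (time_of_flight, horizontal_range, peak_height) for a no-drag launch."""
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    # Solve launch_height + vy*t - 0.5*g*t^2 = 0 for the positive root.
    t_flight = (vy + math.sqrt(vy**2 + 2 * G * launch_height)) / G
    return t_flight, vx * t_flight, launch_height + vy**2 / (2 * G)

t, x, h = tomato_trajectory(speed=20.0, angle_deg=45.0)
print(f"splat after {t:.2f} s, {x:.1f} m downrange, peak height {h:.1f} m")
```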
[Play The Game](https://gotm.io/askstudio/pandemic-hero) ## Inspiration Our inspiration comes from the concern of **misinformation** surrounding **COVID-19 Vaccines** in these challenging times. As students, not only do we love to learn, but we also yearn to share the gifts of our knowledge and creativity with the world. We recognize that a fun and interactive way to learn crucial information related to STEM and current events is rare. Therefore we aim to give anyone this opportunity using the product we have developed. ## What it does In the past 24 hours, we have developed a pixel art RPG game. In this game, the user becomes a scientist who has experienced the tragedies of COVID-19 and is determined to find a solution. Become the **Hero of the Pandemic** by overcoming the challenging puzzles that give you a general understanding of the Pfizer-BioNTech vaccine's development process, myths, and side effects. Immerse yourself in the original artwork and touching story-line. At the end, complete a short feedback survey, get an immediate analysis of your responses through our **Machine Learning Model**, and receive additional learning resources tailored to your experience to further your knowledge and curiosity about COVID-19. Team A.S.K. hopes that through this game, you become further educated by the knowledge you attain and inspired by your potential for growth when challenged. ## How I built it We built this game primarily using the Godot Game Engine, a cross-platform open-source game engine that provides the design tools and interfaces to create games. This engine mostly uses GDScript, a Python-like dynamically typed language designed explicitly for development in the Godot Engine. We chose Godot to ease cross-platform support using the OpenGL API and GDScript, a relatively more programmer-friendly language. We started off using **Figma** to plan out and identify a theme based on type and colour. Afterwards, we separated components into groupings that maintain similar characteristics, such as label outlining and movable objects with no outlines. Finally, as we discussed new designs, we added them to our pre-made categories to create a consistent, user-experience-driven UI. Our Machine Learning model is a content-based recommendation system built with Scikit-learn, which works with data that users provide implicitly through a brief feedback survey at the end of the game. Additionally, we made a server using the Flask framework to serve our model. ## Challenges I ran into Our first significant challenge was navigating through the plethora of game features possible with GDScript and continually referring to the documentation. Although Godot is heavily documented, as an open-source engine it has frequent bugs with rendering, layering, event handling, and more, which we creatively overcame. A prevalent design challenge was learning and creating pixel art with the time constraint in mind. To accomplish this, we methodically used as many shortcuts and tools as possible to copy/paste or select repetitive sections. Additionally, incorporating Machine Learning in our project was a challenge in itself. Also, sending requests, displaying JSON, and making the recommendations selectable were considerable challenges using Godot and GDScript. Finally, the biggest challenge of game development for our team was balancing **UX-driven** considerations to deliver both a fun, challenging puzzle game and an educational experience that leaves some form of an impact on the player.
Brainstorming and continuously modifying the story-line while implementing the animations using Godot required a lot of adaptability and creativity. ## Accomplishments that I'm proud of We are incredibly proud of our ability to bring our past experiences gaming into the development process and incorporating modifications of our favourite gaming memories. The development process was exhilarating and brought the team down the path of nostalgia which dramatically increased our motivation. We are also impressed by our teamwork and team chemistry, which allowed us to divide tasks efficiently and incorporate all the original artwork designs into the game with only a few hiccups. We accomplished so much more within the time constraint than we thought, such as training our machine learning model (although with limited data), getting a server running up and quickly, and designing an entirely original pixel art concept for the game. ## What I learned As a team, we learned the benefit of incorporating software development processes such as **Agile Software Development Cycle.** We solely focused on specific software development stages chronologically while returning and adapting to changes as they come along. The Agile Process allowed us to maximize our efficiency and organization while minimizing forgotten tasks or leftover bugs. Also, we learned to use entirely new software, languages, and skills such as Godot, GDScript, pixel art, and design and evaluation measurements for a serious game. Finally, by implementing a Machine Learning model to analyze and provide tailored suggestions to users, we learned the importance of a great dataset. Following **Scikit-learn** model selection graph or using any cross-validation techniques are ineffective without the data set as a foundation. The structure of data is equally important to manipulate the datasets based on task requirements to increase the model's score. ## What's next for Pandemic Hero We hope to continue developing **Pandemic Hero** to become an educational game that supports various age ranges and is worthy of distribution among school districts. Our goal is to teach as many people about the already-coming COVID-19 vaccine and inspire students everywhere to interpret STEM in a fun and intuitive manner. We aim to find support from **mentors** along the way, who can help us understand better game development and education practices that will propel the game into a deployment-ready product. ### Use the gotm.io link below to play the game on your browser or follow the instructions on Github to run the game using Godot
## Inspiration During the pandemic, we found ourselves sitting down all day long in a chair, staring into our screens and stagnating away. We wanted a way for people to get their blood rushing and have fun with a short but simple game. Since we were interested in getting into Augmented Reality (AR) apps, we thought it would be perfect to have a game where the player has to actively move a part of their body around to dodge something you see on the screen, and thus Splatt was born! ## What it does All one needs is a browser and a webcam to start playing the game! The goal is to dodge falling barrels and incoming cannonballs with your head, but you can also use your hands to "cut" down the projectiles (you'll still lose partial lives, so don't overuse your hand!). ## How we built it We built the game using JavaScript, React, Tensorflow, and WebGL2. Horace worked on the 2D physics, getting the projectiles to fall and be thrown around, as well as working on the hand tracking. Thomas worked on the head tracking using Tensorflow and outputting the necessary values we needed to be able to implement collision, as well as the basic game menu. Lawrence worked on connecting the projectile physics and the head/hand tracking together to ensure proper collision could be detected, as well as restructuring the app to be more optimized than before. ## Challenges we ran into It was difficult getting both the projectiles and the head/hand from the video on the same layer - we had initially used two separate canvasses for this, but we quickly realized it would be difficult to communicate from one canvas to another without causing too many rerenders. We ended up using a single canvas and after adjusting how we retrieved the coordinates of the projectiles and the head/hand, we were able to get collisions to work. ## Accomplishments that we're proud of We're proud about how we divvy'd up the work and were able to connect everything together to get a working game. During the process of making the game, we were excited to have been able to get collisions working, since that was the biggest part to make our game complete. ## What we learned We learned more about implementing 2D physics in JavaScript, how we could use Tensorflow to create AR apps, and a little bit of machine learning through that. ## What's next for Splatt * Improving the UI for the game * Difficulty progression (1 barrel, then 2 barrels, then 2 barrels and 1 cannonball, and so forth)
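The collision step described in the Splatt writeup is language-agnostic even though the game itself is JavaScript; a bounding-circle test like the one below (shown in Python, with made-up coordinates) is one common way to check whether a projectile hits the tracked head:

```python
from dataclasses import dataclass

@dataclass
class Circle:
    x: float
    y: float
    r: float

def collides(a: Circle, b: Circle) -> bool:
    """True when two bounding circles overlap (squared distances avoid a sqrt)."""
    dx, dy = a.x - b.x, a.y - b.y
    return dx * dx + dy * dy <= (a.r + b.r) ** 2

head = Circle(x=320, y=240, r=60)     # from the head-tracking model (example values)
barrel = Circle(x=350, y=250, r=25)   # from the physics step (example values)
print("hit!" if collides(head, barrel) else "dodged")
```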
winning
## Inspiration Keep yourself hydrated! ## What it does Tracks the amount of water you drank. ## How we built it We had a Bluetooth module connected to the Arduino and sent the collected data through to the app. ## Challenges we ran into The initial setup was quite difficult; we weren't sure what tools we were going to use, and finding the limitations of those tools could be quite challenging and unexpected. ## Accomplishments that we're proud of We successfully managed to combine the mechanical, electrical, and software aspects of the project. We are also proud that university students from different cultural and technical backgrounds gathered as a team and successfully carried out the project. ## What we learned We learned how to maximize the usage of various tools: how to calculate the amount of water that has traveled through the tube using the volumetric flow rate and the time taken, how to send sensor data to the app, and how to build an app that receives such data while providing an intuitive user experience. ## What's next for Hydr8 A smaller component to fit in a bottle, and more sensors to increase the accuracy of the tracked data. More integration with the app would also be a huge improvement.
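The water-tracking arithmetic mentioned above (volumetric flow rate multiplied by time) amounts to a small discrete integral. A sketch in Python, assuming the sensor reports litres per minute at a fixed sampling interval (both the readings and the interval below are made up):

```python
def total_volume_ml(flow_lpm_samples, interval_s=1.0):
    """Integrate flow-rate samples (litres/minute) over time into millilitres."""
    litres = sum(rate * (interval_s / 60.0) for rate in flow_lpm_samples)
    return litres * 1000.0

# e.g. five seconds of readings while someone takes a drink
readings = [0.0, 0.9, 1.2, 1.1, 0.3]  # L/min, one reading per second
print(f"{total_volume_ml(readings):.0f} mL drunk")  # roughly 58 mL
```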
## Inspiration [www.loolooloo.co](http://www.loolooloo.co) is inspired by the famous Waterloo chant—Water Water Water! Loo Loo Loo! And, of course, the natural consequence of drinking lots of water, water, water, being a visit to the loo, loo, loo! ## What it does [www.loolooloo.co](http://www.loolooloo.co) detects when you go to a water fountain, and sends you a text message with a custom link to an interactive map that gives you directions to the nearest bathroom. It also has a daily water tracker that incentivizes you to stay hydrated. ## How we built it We utilized the MappedIn API to implement an indoor navigation system for building maps. The front-end was developed using React, while the back-end was powered by Node.js. Twilio's API was integrated to enable SMS notifications with custom map links. ## Challenges we ran into Deploying a Bluetooth beacon to detect user proximity and send real-time requests to an HTTPS port proved to be a complex task, particularly in managing secure communications and ensuring reliable detection within a defined radius. ## Accomplishments that we're proud of We successfully built a fully functional system where the front and back ends work seamlessly together, delivering a real-time user experience by sending SMS directions to the nearest restroom whenever a user approaches a water fountain. ## What we learned We gained valuable experience in integrating proximity detection systems with cloud services, handling real-time data, and optimizing our full stack application for responsive and secure user interactions. ## What's next for [www.loolooloo.co](http://www.loolooloo.co) Future plans include enhancing the system's accuracy, and making the UI more user friendly.
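The SMS step described above can be sketched with Twilio's Python helper library (the actual back end is Node.js, so this is only an illustration; the credentials, phone numbers, and map URL are placeholders):

```python
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

def send_loo_directions(to_number: str, map_link: str) -> str:
    """Text the user a custom MappedIn link to the nearest bathroom."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body=f"Water, water, water! Nearest loo, loo, loo: {map_link}",
        from_="+15551234567",   # your Twilio number (placeholder)
        to=to_number,
    )
    return message.sid

send_loo_directions("+15557654321", "https://example.com/map/nearest-loo")
```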
## Inspiration Britafull is inspired by the ubiquitous student experience of living with people who may drive us crazy. Some roommates never wash their dishes, some don't take out the trash, but most insidious and aggravating of all... not refilling the Brita!! It's the little things that motivate you to finally get your own place. According to our ***highly*** scientific research, 80% of students face the **SAME** problem! ## What it does Britafull uses weight to detect when the Brita's water level drops below a certain point. The next person who grabs the Brita and empties it to that point has only seconds before the speaker *kindly* reminds them to be a good roommate and fill up that Brita. If that's not incentive enough, waiting even longer triggers a text message reminder. That's right, the whole group chat knows now. Call it petty, but here at Britafull, we get results. ## How we built it We used force-sensing resistors to detect changes in force, accounting for the presence or absence of the Brita. This analog information is passed to an Arduino, which converts it to a digital signal that is sent to our Raspberry Pi. We programmed primarily in Python and implemented functions to check when the Brita is empty based on the input data; if it is, we then trigger an alarm which prompts the user to refill. Lastly, if the Brita remains unfilled, we use Twilio to send a message to the entire roommate group chat that someone needs to get on their Brita game! ## Challenges we ran into As with many hardware hacks, our challenges were related to finding the right parts to use and connecting the Raspberry Pi, the Arduino, and our personal computers! Once we gathered the resistors that were compatible with the sensors to take the measurements we needed, we ran into compatibility problems that came from running Twilio's API with the rest of our backend interface. Also, we needed a Brita (generously provided by a fellow hacker)!! When working with force-sensing resistors to obtain the weight of the Brita, we found that results often fluctuated as the water and the sensor settled. To counter this, we found that gathering data across several seconds in 500ms intervals gave us a reliable set of data points, which we cleaned up with numpy for further accuracy. ## Accomplishments that we're proud of This was the first hackathon for many of us, and our first experience doing a hardware hack! We are proud of ourselves for pushing through the in-person experience and coming up with such a funny yet overwhelmingly practical product. ## What we learned For those of us without hardware experience, we learned how hardware-software system interfaces work! We also learned that the student housing experience is, indeed, universal. ## What's next for Britafull Britafull will never stop finding ways to manage your household and roommate problems for you. One way we could advance our platform is by implementing a machine learning algorithm that identifies the roommate in violation of the first roommate commandment: thou shalt refill the Brita as SOON AS IT'S EMPTY. In our pursuit of ensuring nonconfrontational hydration for all, we created a product mockup of Britafull should it ever hit the market. Featuring strong ABS plastic, inductive charging, force-sensing resistors, thin piezoelectric speakers, and ultrathin lithium batteries, we made sure that it would fit in with your daily routine and kitchen aesthetic without a hitch!
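The smoothing trick described above (sampling the force-sensing resistor every 500 ms and cleaning the readings up with numpy) might look roughly like this; the threshold and the read function are stand-ins for the real Arduino-to-Pi pipeline:

```python
import time
import numpy as np

EMPTY_THRESHOLD = 180.0   # calibrated raw FSR value for an "empty" Brita (made up)

def read_fsr():
    """Stand-in for reading the force value forwarded by the Arduino."""
    return 175.0 + np.random.randn()  # pretend the pitcher is nearly empty

def brita_is_empty(samples=10, interval_s=0.5):
    """Average several readings so sloshing water doesn't trigger false alarms."""
    readings = []
    for _ in range(samples):
        readings.append(read_fsr())
        time.sleep(interval_s)
    return float(np.mean(readings)) < EMPTY_THRESHOLD

if brita_is_empty():
    print("Refill the Brita before the group chat finds out!")
```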
partial
## Inspiration The inspiration came from the desire to learn about sophisticated software without the massive financial burden that comes with premium drone hardware. So the question arose: what if the software could be used on any drone and made available open source? ## What it does The software allows any drone to track facial movement and hand gestures and to fly and move while keeping the user in the center of the frame, which can be utilized at multiple different levels! We aim for our technology to improve the experience of photographers through hands-off control, and to lower the barrier to entry for drones by making them simpler to use. ## How we built it We mainly used Python with the help of libraries and frameworks like PyTorch, YOLOv8, MediaPipe, OpenCV, tkinter, PIL, DJITello, etc. ## Challenges we ran into While implementing hand-gesture commands, we had a setback and faced a problem we have not yet solved. The integration between face recognition, hand recognition, drone functions, etc. was harder than we anticipated since it had a lot of moving parts that we needed to connect. None of us had any UI experience, so creating the interface was a challenge too. ## Accomplishments that we're proud of We have implemented pivot-tracking and move-tracking features. Pivot-tracking allows the user to make the drone stationary while turning on its axis to follow the user. Move-tracking is basically having your drone on a hands-free leash (it follows you anywhere you go)! We implemented accurate hand-gesture recognition, although we have yet to implement new functions attached to the gestures. A lot of the frameworks were brand new to us, but we were still able to learn them and create functional software. ## What we learned Understanding project scope and what can be done in limited time was an important lesson for us that we will definitely take moving forward. We learned a lot of new frameworks like MediaPipe, YOLOv8, DJITello, tkinter, and PIL. ## What's next for Open Droid Adding functions attached to the hand gestures, adding a slingshot feature, etc. Since our hand recognition software can detect two hands, the left hand will control the mode of the drone (low fly, high fly, slow fly, fast fly, default) and the right hand will control functions (go back, come closer, circle around me, slingshot, land on my hand, etc.). After accomplishing these goals we would like to make the software more user friendly and open source!
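The pivot-tracking behaviour described above comes down to measuring how far the face sits from the centre of the frame and turning that error into a yaw command. A hedged sketch of that control step with OpenCV and MediaPipe, using a webcam in place of the drone feed; the actual drone call is left as a comment because the exact DJITello wrapper isn't shown in the writeup:

```python
import cv2
import mediapipe as mp

face_detector = mp.solutions.face_detection.FaceDetection(min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)          # webcam stand-in for the drone's video feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.detections:
        box = results.detections[0].location_data.relative_bounding_box
        face_cx = box.xmin + box.width / 2           # 0..1, left to right
        error = face_cx - 0.5                        # signed offset from frame centre
        yaw = int(max(-100, min(100, error * 200)))  # simple proportional yaw command
        print(f"yaw command: {yaw}")
        # e.g. tello.send_rc_control(0, 0, 0, yaw) with the djitellopy wrapper (assumption)
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```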
## Inspiration: As a group of 4 people who met each other for the first time, we saw this event as an inspiring opportunity to learn new technology and face challenges that we were wholly unfamiliar with. Although intuitive when combined, every feature of this project was a distant puzzle piece of our minds that has been collaboratively brought together to create the puzzle you see today over the past three days. Our inspiration was not solely based upon relying on the minimum viable product; we strived to work on any creative idea sitting in the corner of our minds, anticipating its time to shine. As a result of this incredible yet elusive strategy, we were able to bring this idea to action and customize countless features in the most innovative and enabling ways possible. ## Purpose: This project involves almost every technology we could possibly work with - and even not work with! Per the previous work experience of Laurance and Ian in the drone sector, both from a commercial and a developer standpoint, our project’s principal axis revolved around drones and their limitations. We improved and implemented features that previously seemed to be the limitations of drones. Gesture control and speech recognition were the main features created, designed to empower users with the ability to seamlessly control the drone. Due to the high threshold commonly found within controllers, many people struggle to control drones properly in tight areas. This can result in physical, mental, material, and environmental damages which are harmful to the development of humans. Laurence was handling all the events at the back end by using web sockets, implementing gesture controllers, and adding speech-to-text commands. As another aspect of the project, we tried to add value to the drone by designing 3D-printed payload mounts using SolidWorks and paying increased attention to detail. It was essential for our measurements to be as exact as possible to reduce errors when 3D printing. The servo motors mount onto the payload mount and deploy the payload by moving its shaft. This innovation allows the drone to drop packages, just as we initially calculated in our 11th-grade physics classes. As using drones for mailing purposes was not our first intention, our main idea continuously evolved around building something even more mind-blowing - innovation! We did not stop! :D ## How We Built it? The prototype started in small but working pieces. Every person was working on something related to their interests and strengths to let their imaginations bloom. Kevin was working on programming with the DJI Tello SDK to integrate the decisions made by the API into actual drone movements. The vital software integration to make the drone work was tested and stabilized by Kevin. Additionally, he iteratively worked on designing the mount to perfectly fit onto the drone and helped out with hardware issues. Ian was responsible for setting up the camera streaming. He set up the MONA Server and broadcast the drone through an RTSP protocol to obtain photos. We had to code an iterative python script that automatically takes a screenshot every few seconds. Moreover, he worked toward making the board static until it received a Bluetooth signal from the laptop. At the next step, it activated the Servo motor and pump. But how does the drone know what it knows? The drone is able to recognize fire with almost 97% accuracy through deep learning. 
Paniz was responsible for training the CNN model for image classification between non-fire and fire pictures. The model has been registered and is ready to receive data from the drone to detect fire. ## Challenges we ran into: There were many challenges that we faced and had to find a way around in order to make the features work together as a system. Our most significant challenge was the lack of cross-compatibility between software, libraries, modules, and networks. As an example, Kevin had to find an alternative path to connect the drone to the laptop since the UDP network protocol was unresponsive. Moreover, he had to investigate gesture integration with drones during this first prototype testing. On the other hand, Ian struggled to connect the different sensors to the drone due to their heavy weight. Moreover, the hardware compatibility called for deep analysis and research since the source of error was unresolved. Laurence was responsible for bringing all the pieces together and integrating them feature by feature. He was successful not only through his technical proficiency but also through continuous integration - another main challenge that he resolved. Moreover, the connection between gesture movement and drone movement was another main challenge he faced due to responsiveness. Data collection was another main challenge our team faced due to an insufficient number of proper fire datasets. Inadequate library and software versions and the incompatibility of virtual and local environments led us to migrate the project from local completion to cloud servers. ## Things we have learned: Almost every one of us had to work with at least one new technology, such as the DJI SDK, new sensor modules, and Python packages. This project helped us gain new skills in a short amount of time with a maximized focus on productivity :D As we ran into different challenges, we learned from our mistakes and tried to eliminate repeated mistakes as much as possible, one after another. ## What is next for Fire Away? Although we weren't able to fully develop all of our ideas, here are some future adventures we have planned for Fire Away: Scraping Twitter for user entries indicating a potential nearby fire. Using Cohere APIs for fluent user speech recognition. Further developing and improving the deep learning algorithm to handle a variety of natural disasters.
## Inspiration We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool. ## What it does AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures. The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch. ## How we built it In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between the body parts to identify a dance move. Although this was successful with a few gestures, it was not ideal for more complex gestures like the "shoot". We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model works with 98% accuracy on the test data set. We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done using rlib and OpenCV to detect facial features and map a static image over them. ## Challenges we ran into We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time. It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in. ## Accomplishments that we're proud of It was one of our first experiences training an ML model for image recognition, and it's a lot more accurate than we had even expected. ## What we learned All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new! ## What's next for AirTunes The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
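The gesture classifier described above (a CNN trained in TensorFlow on about 1000 samples per move) could be laid out along these lines; the layer sizes, input resolution, and ten-class output are assumptions rather than the team's exact architecture:

```python
import tensorflow as tf

NUM_MOVES = 10  # ten dance moves (assumed from the writeup)

def build_gesture_model(input_shape=(64, 64, 1)):
    """A small CNN in the spirit of the AirTunes classifier (not the exact one)."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_MOVES, activation="softmax"),
    ])

model = build_gesture_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```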
losing
MediBot: Help us help you get the healthcare you deserve ## Inspiration: Our team went into the ideation phase of Treehacks 2023 with the rising relevance and apparency of conversational AI as a “fresh” topic occupying our minds. We wondered if and how we can apply conversational AI technology such as chatbots to benefit people, especially those who may be underprivileged or underserviced in several areas that are within the potential influence of this technology. We were brooding over the six tracks and various sponsor rewards when inspiration struck. We wanted to make a chatbot within healthcare, specifically patient safety. Being international students, we recognize some of the difficulties that arise when living in a foreign country in terms of language and the ability to communicate with others. Through this empathetic process, we arrived at a group that we defined as the target audience of MediBot; children and non-native English speakers who face language barriers and interpretive difficulties in their communication with healthcare professionals. We realized very early on that we do not want to replace the doctor in diagnosis but rather equip our target audience with the ability to express their symptoms clearly and accurately. After some deliberation, we decided that the optimal method to accomplish that using conversational AI was through implementing a chatbot that asks clarifying questions to help label the symptoms for the users. ## What it does: Medibot initially prompts users to describe their symptoms as best as they can. The description is then evaluated to compare to a list of proper medical terms (symptoms) in terms of similarity. Suppose those symptom descriptions are rather vague (do not match very well with the list of official symptoms or are blanket terms). In that case, Medibot asks the patients clarifying questions to identify the symptoms with the user’s added input. For example, when told, “My head hurts,” Medibot will ask them to distinguish between headaches, migraines, or potentially blunt force trauma. But if the descriptions of a symptom are specific and relatable to official medical terms, Medibot asks them questions regarding associated symptoms. This means Medibot presents questions inquiring about symptoms that are known probabilistically to appear with the ones the user has already listed. The bot is designed to avoid making an initial diagnosis using a double-blind inquiry process to control for potential confirmation biases. This means the bot will not tell the doctor its predictions regarding what the user has, and it will not nudge the users into confessing or agreeing to a symptom they do not experience. Instead, the doctor will be given a list of what the user was likely describing at the end of the conversation between the bot and the user. The predictions from the inquiring process are a product of the consideration of associative relationships among symptoms. Medibot keeps track of the associative relationship through Cosine Similarity and weight distribution after the Vectorization Encoding Process. Over time, Medibot zones in on a specific condition (determined by the highest possible similarity score). The process also helps in maintaining context throughout the chat conversations. Finally, the conversation between the patient and Medibot ends in the following cases: the user needs to leave, the associative symptoms process suspects one condition much more than the others, and the user finishes discussing all symptoms they experienced. 
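As a rough illustration of the symptom-similarity step just described (embedding the patient's free text and the official symptom list, then comparing them with cosine similarity), here is a minimal sketch with the sentence-transformers library; the model name and the tiny symptom list are placeholders, not MediBot's actual setup:

```python
from sentence_transformers import SentenceTransformer, util

OFFICIAL_SYMPTOMS = ["headache", "migraine", "nausea", "fever", "chest pain"]

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder model choice
symptom_embeddings = model.encode(OFFICIAL_SYMPTOMS, convert_to_tensor=True)

def rank_symptoms(user_text: str, top_k: int = 3):
    """Return the official symptoms most similar to what the patient typed."""
    query = model.encode(user_text, convert_to_tensor=True)
    scores = util.cos_sim(query, symptom_embeddings)[0]
    ranked = sorted(zip(OFFICIAL_SYMPTOMS, scores.tolist()), key=lambda p: -p[1])
    return ranked[:top_k]

print(rank_symptoms("my head hurts a lot"))
# Uniformly low scores would trigger a clarifying question instead of a match.
```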
## How we built it We constructed the MediBot web application in two different and interconnected stages, frontend and backend. The front end is a mix of ReactJS and HTML. There is only one page accessible to the user which is a chat page between the user and the bot. The page was made reactive through several styling options and the usage of states in the messages. The back end was constructed using Python, Flask, and machine learning models such as OpenAI and Hugging Face. The Flask was used in communicating between the varying python scripts holding the MediBot response model and the chat page in the front end. Python was the language used to process the data, encode the NLP models and their calls, and store and export responses. We used prompt engineering through OpenAI to train a model to ask clarifying questions and perform sentiment analysis on user responses. Hugging Face was used to create an NLP model that runs a similarity check between the user input of symptoms and the official list of symptoms. ## Challenges we ran into Our first challenge was familiarizing ourselves with virtual environments and solving dependency errors when pushing and pulling from GitHub. Each of us initially had different versions of Python and operating systems. We quickly realized that this will hinder our progress greatly after fixing the first series of dependency issues and started coding in virtual environments as solutions. The second great challenge we ran into was integrating the three separate NLP models into one application. This is because they are all resource intensive in terms of ram and we only had computers with around 12GB free for coding. To circumvent this we had to employ intermediate steps when feeding the result from one model into the other and so on. Finally, the third major challenge was resting and sleeping well. ## Accomplishments we are proud of First and foremost we are proud of the fact that we have a functioning chatbot that accomplishes what we originally set out to do. In this group 3 of us have never coded an NLP model and the last has only coded smaller scale ones. Thus the integration of 3 of them into one chatbot with front end and back end is something that we are proud to have accomplished in the timespan of the hackathon. Second, we are happy to have a relatively small error rate in our model. We informally tested it with varied prompts and performed within expectations every time. ## What we learned: This was the first hackathon for half of the team, and for 3/4, it was the first time working with virtual environments and collaborating using Git. We learned quickly how to push and pull and how to commit changes. Before the hackathon, only one of us had worked on an ML model, but we learned together to create NLP models and use OpenAI and prompt engineering (credits to OpenAI Mem workshop). This project's scale helped us understand these ML models' intrinsic moldability. Working on Medibot also helped us become much more familiar with the idiosyncrasies of ReactJS and its application in tandem with Flask for dynamically changing webpages. As mostly beginners, we experienced our first true taste of product ideation, project management, and collaborative coding environments. ## What’s next for MediBot The next immediate steps for MediBot involve making the application more robust and capable. In more detail, first we will encode the ability for MediBot to detect and define more complex language in simpler terms. 
Second, we will improve upon the initial response to allow for more substantial multi-symptom functionality.Third, we will expand upon the processing of qualitative answers from users to include information like length of pain, the intensity of pain, and so on. Finally, after this more robust system is implemented, we will begin the training phase by speaking to healthcare providers and testing it out on volunteers. ## Ethics: Our design aims to improve patients’ healthcare experience towards the better and bridge the gap between a condition and getting the desired treatment. We believe expression barriers and technical knowledge should not be missing stones in that bridge. The ethics of our design therefore hinges around providing quality healthcare for all. We intentionally stopped short of providing a diagnosis with Medibot because of the following ethical considerations: * **Bias Mitigation:** Whatever diagnosis we provide might induce unconscious biases like confirmation or availability bias, affecting the medical provider’s ability to give proper diagnosis. It must be noted however, that Medibot is capable of producing diagnosis. Perhaps, Medibot can be used in further research to ensure the credibility of AI diagnosis by checking its prediction against the doctor’s after diagnosis has been made. * **Patient trust and safety:** We’re not yet at the point in our civilization’s history where patients are comfortable getting diagnosis from AIs. Medibot’s intent is to help nudge us a step down that path, by seamlessly, safely, and without negative consequence integrating AI within the more physical, intimate environments of healthcare. We envision Medibot in these hospital spaces, helping users articulate their symptoms better without fear of getting a wrong diagnosis. We’re humans, we like when someone gets us, even if that someone is artificial. However, the implementation of AI for pre-diagnoses still raises many ethical questions and considerations: * **Fairness:** Use of Medibot requires a working knowledge of the English language. This automatically disproportionates its accessibility. There are still many immigrants for whom the questions, as simple as we have tried to make them, might be too much for. This is a severe limitation to our ethics of assisting these people. A next step might include introducing further explanation of troublesome terms in their language (Note: the process of pre-diagnosis will remain in English, only troublesome terms that the user cannot understand in English may be explained in a more familiar language. This way we further build patients’ vocabulary and help their familiarity with English ). There are also accessibility concerns as hospitals in certain regions or economic stratas may not have the resources to incorporate this technology. * **Bias:** We put severe thought into bias mitigation both on the side of the doctor and the patient. It is important to ensure that Medibot does not lead the patient into reporting symptoms they don’t necessarily have or induce availability bias. We aimed to circumvent this by asking questions seemingly randomly from a list of symptoms generated based on our Sentence Similarity model. This avoids leading the user in just one direction. However, this does not eradicate all biases as associative symptoms are hard to mask from the patient (i.e a patient may think chills if you ask about cold) so this remains a consideration. 
* **Accountability:** Errors in symptom identification can be tricky to detect making it very hard for the medical practitioner to know when the symptoms are a true reflection of the actual patient’s state. Who is responsible for the consequences of wrong pre-diagnoses? It is important to establish these clear systems of accountability and checks for detecting and improving errors in MediBot. * **Privacy:** MediBot will be trained on patient data and patient-doctor diagnoses in future operations. There remains concerns about privacy and data protection. This information, especially identifying information, must be kept confidential and secure. One method of handling this is asking users at the very beginning whether they want their data to be used for diagnostics and training or not.
## Inspiration Too many impersonal doctor's office experiences, combined with the love of technology and a desire to aid the healthcare industry. ## What it does Takes a conversation between a patient and a doctor and analyzes all symptoms mentioned in the conversation to improve diagnosis. Ensures the doctor will not have to transcribe the interaction and can focus on the patient for more accurate, timely and personal care. ## How we built it Ruby on Rails for the structure with a little bit of React. Bayesian Classification procedures for the natural language processing. ## Challenges we ran into Working in a noisy environment was difficult considering the audio data that we needed to process repeatedly to test our project. ## Accomplishments that we're proud of Getting keywords, including negatives, to match up in our natural language processor. ## What we learned How difficult natural language processing is and all of the minute challenges with a machine understanding humans. ## What's next for Pegasus Turning it into a virtual doctor that can predict illnesses using machine learning and experience with human doctors.
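The Bayesian classification mentioned in the Pegasus writeup above can be sketched with scikit-learn's multinomial naive Bayes over a bag-of-words; the toy transcript snippets and labels below are purely illustrative:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up doctor-patient snippets labelled by symptom topic.
snippets = [
    "my head has been pounding since monday",
    "no headache but I feel dizzy when standing",
    "sharp pain in my chest when breathing",
    "chest feels fine, mostly a dry cough",
]
labels = ["headache", "dizziness", "chest pain", "cough"]

classifier = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(snippets, labels)

print(classifier.predict(["patient reports pounding head and light sensitivity"]))
```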
## Inspiration There is no product like this out there, and all the features mentioned can be implemented in fully legal ways; it could be revolutionary for Canada. ## What it does The app allows the user to send money to other people, not via an ordinary bank-to-bank transfer but from crypto to Interac (which goes to the bank). The app also serves as an investment app because it allows the user to hold onto crypto, acting as a wallet as well. Plus, the app allows payments to be taken through a QR code, which will be beneficial for both suppliers and customers, because suppliers can receive payment in both crypto and Interac. ## How we built it Flutter, Figma, Dart, Firebase ## Challenges we ran into As this idea is revolutionary and no one has ever built something like this before, there are no resources for it, not even an API. ## Accomplishments that we're proud of As we had no access to any API or anything related to our project, we built the app as a prototype, because we actually want to take this project to the next level, and we managed to create prototypes for both the Mobile App and the Web App. ## What we learned Amazing ways to monetize a business, and that working with new people can be amazing and you can become great friends. ## What's next for GroPay We will try to get funding for this project, as it simply cannot function without it, and take it to the next level.
partial
# CodaCat ## How to run: * Clone the package * Pull from anas-dev and dev to get all the packages * Run **npm run postinstall** in the root to install dependencies of all subclasses * Navigate to each exercise and run the command **npm run start** ## CodaCat is the perfect place for children to learn about coding with Coda in a simple and fun way. It includes many levels with multiple concepts to help children learn to code from a young age. We were inspired by the teacher's pet and the warm and fuzzy categories to create something cute and easy to pick up for children. ### There are five main concepts included in the game, each having levels with varying difficulties: ### Inputting and Outputting A task where the player has to make the cat appear on the screen by simply using .show(). A small interaction between Coda and the player where the player has to output a greeting and receives a greeting back from Coda. An assignment that involves positions and interaction with the cursor and mouse: the player has to code the position of the mouse, which makes the eyes of the cat look in that direction when the mouse button is pressed. ### If and Else Statements For this task, the player has to create if statements to build a game of finding where Coda is hiding on the screen. The player has to change the volume of a 'meow' based on the distance from Coda so that it gets louder as you get closer. Another assignment has Coda walking back and forth from left to right on the screen. The player has to create if statements so that when Coda reaches the right end of the screen he changes direction and walks the opposite way, and does the same when reaching the left end. ### Loop Structures The player creates a program with a for loop that chooses the number of cats on each row to be output on the screen. ### Arrays Coda will be showing off his dance moves and the player's job is to understand the idea of arrays by creating and choosing the order of the dance moves. The dance moves will then be repeated based on the order created by the player. The player then has to write slightly more complicated code which includes a bubble sort. There will be an array of cats of various heights and the job of the player is to order the cats from shortest to tallest, left to right. This will show the cats moving around as they are organized, illustrating the idea of a bubble sort. ### Functions Coda and his friends are hungry for food. The player's job is to create a function that receives the number of cats and gives each of them food to be happy and healthy. ### Others Flocking Concept - The cats will be running for food at a certain position on the edge of the screen. All the cats will be trying to eat the food and move in groups, but also do not want to collide with each other. The player has to figure out the mechanism and make it work (we are still working out exactly how this part will function).
## Inspiration All students have been in a situation where they’re reviewing lecture slides and simply cannot follow along. Grasping lecture material, particularly for lessons loaded with abstract visual models, can be difficult without an auditory aid. *That’s why EduLecture.ai is here to help.* By increasing the accessibility of remote education, EduLecture.ai boosts student comprehension and engagement. Students remain more engaged when hearing a lecture in a familiar teacher’s voice, resulting in improved academic performance. ## What it does EduLecture.ai is a website that allows educators to seamlessly generate a video presentation of their lecture. Using AI and machine learning, EduLecture.ai accepts a short audio sample of the educator’s voice and their Google Slides lecture, accompanied by slide notes, to create a video lecture narrated in the educator’s unique voice. These videos are then uploaded and saved to the educator’s online profile, on which students can easily access their educator's content, whether it's for supplementary review material, makeup work, or much more. ## How we built it EduLecture.ai was built using a combination of React, Django, and Python. In order to build the frontend for EduLecture.ai, we used React.js to easily integrate interactions with the API. For the backend, we coded a server using Django Rest Framework for the API endpoints that the frontend called. On the backend, we used different Python libraries for the voice-generative AI tooling that lived in the Django server. ## Challenges we ran into Some of the major challenges we ran into included issues with authenticating the Google Cloud Platform Project, as well as creating an authentication system between Django and React. It was our first time using Django and React together, so there was quite a bit to learn there. We also ran into additional issues with managing files on the Django server for generating the AI-narrated video. ## Accomplishments that we're proud of We’re proud of the quick timing it takes to process the input lecture and convert it into a functional, professional video. We’re also proud of the concept and potential of our idea, as it’s a novel application of cutting-edge AI technology that truly addresses issues of accessibility for students and teachers alike. ## What we learned We learned how to create a website using React and Django together. We also learned how to set up a Google Cloud and work with Google APIs and authentication. Finally, we learned how to integrate AI tools into the full-stack framework development pipeline. ## What's next for EduLecture.ai We aim to expand our features and implement a premium plan for users. Educators without the premium plan will still be able to access our basic functionalities. In addition, subscribers to the premium plan will have more advanced features such as translation, collaboration with other users, auto-generation of thorough lecture notes from sparse bulleted notes, and more. ## Google Cloud We have registered under the domain edulectureai.com. ## Contact us Here are our Discord handles if you would like to contact us! Anthony Zhai: antz22 Emily Zhang: gudetamaluvr Ruoming Shen: ruomingshen
## Inspiration The inspiration behind this page is truly driven by 'Onefan' itself, which has been a true labour of love and devotion for about 10 months. Basically, 'Onefan' is a site that aims to give people amazing conversations with a unique chat UI by using an algorithm to match you to someone who has the same interest as you in a TV show (more kinds of content coming soon). The reason for the cleanly designed UI on the beta sign-up page is that I had an old sign-up page that was incredibly mediocre, so I looked at the problem and asked what I could do to improve and solve it; I always strive for better. This beta sign-up page was only about 1/3 created before Hack Western, but now it is all done! ## What it does This page simply uses Firebase, JSON/JS, and EmailJS to get data, package it in a JSON, and push it to a server so I know who signed up for the beta. It also uses the clever EmailJS library and some tricky code to email the person who just signed up. ## How I built it Just the regular web cocktail of: * JS * CSS * HTML as well as some external APIs: * Firebase * EmailJS ## Challenges I ran into * The JSON object of the user data having asynchronous callback issues. * Working with the EmailJS library. * [unrelated] Creating the matchmaking algorithm for the website. ## Accomplishments that I'm proud of * The clean and minimalistic/functional UI ## What I learned * How to send parsed JSON to Firebase quickly ## What's next for Onefan Beta Page * Once the site launches, this page will have already served its purpose :).
partial
## Inspiration We wanted to combine our interest in music and object detection software. ## What it does The wavechord detects your hand position above the Kinect and maps your position to a corresponding chord on the piano. Currently, the synthesizer generates I, IV, and V major chords for simplicity. ## How we built it We built this using OpenCV, PyAudio, a Kinect, and a Raspberry Pi 3 B+. ## Challenges we ran into The Raspberry Pi has limited computational headroom, so we were unable to compile the PyTorch library to link the convolutional neural network we trained to detect the hand gestures. We used some open source code for [hand detection](https://github.com/madhav727/medium) instead. A pre-trained model in PyTorch would have allowed us to have more control over the notes, pitch, etc. ## Accomplishments that we're proud of * We worked around a limited timeframe and resources and created a useful chord player. * Implemented a CNN to detect sign language for letters A to I using PyTorch on individual images. ## What we learned * Compilation on the Raspberry Pi takes a longgg time. Configuring more swap space is very necessary when using cmake on a system with limited RAM like the Pi. * The Kinect is difficult to configure on the Pi. Even with OpenCV compiled with FFMPEG support, we couldn't get an RGB colour channel. Instead, we used the fswebcam CLI program to capture images with the Kinect at around 1 FPS. * You can create chords via superposition of independent frequencies. We used PyAudio and NumPy to generate chords that traditionally sound nice together (i.e. I, IV, V). * To install PyTorch on the Pi, we would need Python 3.6 or higher. Building Python 3.6.5 failed twice (~4 hours). PyTorch supposedly takes 12 hours! Space was limited as well, so configuring more swap space would have been prohibitive. Next time we could maybe use a VM running Raspbian Stretch to build Python and PyTorch. ## What's next for wavechord * An improved input mechanism. * An improved sound engine (Pi audio was difficult to configure; JACK server?) * Maybe use a different audio library, such as [Pyo](http://ajaxsoundstudio.com/software/pyo/) (DSP) * Depth sensor support with the Kinect - i.e. vibrato, pitch bending, volume adjustment, aftertouch simulation
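Chord generation by superposition, as described in the wavechord writeup above, is just a sum of sine waves at the chord's note frequencies. A NumPy sketch that writes a C major chord to a WAV file (PyAudio playback omitted to keep it dependency-light):

```python
import wave
import numpy as np

SAMPLE_RATE = 44100

def chord(freqs, seconds=1.0, amplitude=0.3):
    """Superimpose sine waves (one per note) into a single 16-bit mono buffer."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    wave_sum = sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)
    return (amplitude * wave_sum * 32767).astype(np.int16)

# C major (the I chord in C), note frequencies in Hz
C_MAJOR = chord([261.63, 329.63, 392.00])

with wave.open("c_major.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(C_MAJOR.tobytes())
```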
## Inspiration We wanted to learn more about Machine Learning. We thought of sign language translation after hearing that the other theme was connectivity, even though that referred more to the technology than the word itself. ## What it does The board interfaces with a webcam, which it uses to take a picture. The picture is then converted to a gray-scale image and centered around the hand. Using the Binary Neural Network (and the weights generated from our machine learning training), the image is classified and a string describing the hand sign is output. ## How we built it We found an MNIST-style sign language dataset on Kaggle. Using Pynq's library and CUDA, we trained our own neural network on one of our personal laptops. This generated weights to be used on the PynqZ2 with their Binary Neural Network overlay. We then used their example scripts as a foundation for our own scripts for taking pictures and processing images. ## Challenges we ran into Most of the challenges we faced were centered around creating the machine learning environment that generated the weights used to classify the images taken by the webcam. A lot of the libraries that we had to use (e.g., Theano) were often outdated, causing us to downgrade/reinstall many times to get the whole thing working. Additionally, setting up the board to interface with the webcam was also an issue. The SD card would often become corrupted and slow down our progress. ## Accomplishments that we're proud of Considering our lack of experience using machine learning libraries, we were proud to be able to set up the machine learning environment that allowed us to train. Additionally, we were able to learn about the general approach to machine learning. ## What we learned We learned how to set up the CUDA library and how to use the basics of the PynqZ2 board and a few of its overlays. ## What's next for Sign Language Image Translation with PynqZ2 Board Ideally, we would like to have the board read from a video stream instead of a set of images. This would allow for live and much more accurate image translation, and would also let us push the board to its full capabilities.
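The preprocessing described above (webcam frame to gray-scale, centered on the hand, shrunk to an MNIST-sized input for the BNN) might look roughly like this with OpenCV; the crop box is a placeholder for wherever the hand-localisation step puts it:

```python
import cv2

def preprocess_for_bnn(frame_bgr, hand_box, size=28):
    """Grayscale, crop around the hand, and shrink to the BNN's input size."""
    x, y, w, h = hand_box                     # from a hand-localisation step (assumed)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hand = gray[y:y + h, x:x + w]
    return cv2.resize(hand, (size, size), interpolation=cv2.INTER_AREA)

frame = cv2.imread("capture.jpg")             # e.g. the fswebcam snapshot
patch = preprocess_for_bnn(frame, hand_box=(100, 80, 200, 200))
cv2.imwrite("bnn_input.png", patch)
```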
## Inspiration Our inspiration has been and will always be to empower various communities with tech. In this project, we attempted to build a product that can be used by anyone and everyone to have some fun with their old photos and experience them differently, mainly through the auditory sense. Imagine you had an old photo of your backyard from when you were 5 but you don't remember what it sounded like back then; that's a feeling we are trying to bring back, injecting some life into your photos. ## What it does ReAlive adds realistic-sounding audio to any photo you want, trying to recreate what it would have been like back at the time of the photo. We take an image as input and return a video that is basically the same image with audio overlaid. This audio is smartly synthesized by extracting information from your image and creating a mapping to our sound dataset. ## How we built it Our project is a web app built using Flask, FastAPI, and basic CSS. It provides a simple input for an image and displays the video after processing it on our backend. Our backend tech stack is TensorFlow, PyTorch, and Pydub to mix audio, with the audio data sitting on Google Cloud Storage and the deep learning models deployed on Google Cloud in containers. Our first task was to extract information from the image and then create a key-map to our sound dataset. After that, we smartly mixed the various audio files into one to make it sound realistic and paint a complete picture of the scene. ## Challenges we ran into Firstly, figuring out the depth and distance calculation for monochannel images using a CNN and OpenCV was a challenging task. Next, applying this to sound intensity mapping ran us into a few challenges. And finally, deployment and API latency were factors we had to deal with and optimize. ## Accomplishments that we're proud of Building and finishing the project in 36 hours!!! ## What we learned We learned that building proofs-of-concept in two days is a really uphill task. A lot of things went our way but some didn't; we made it in the end, and our key learnings were: creating a refreshing experience for a user takes a lot of research, and having insufficient data is not great. ## What's next for ReAlive Image animation and fluidity with augmented sound and augmented reality.
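The mixing step described above (overlaying several ambient clips, with quieter gain for sound sources that sit deeper in the image) can be sketched with pydub; the depth-to-decibel rule and the file names are assumptions, not ReAlive's actual mapping:

```python
from pydub import AudioSegment

def mix_scene(sound_files_with_depth, max_attenuation_db=18):
    """Overlay clips, attenuating each in proportion to its estimated depth (0-1)."""
    mix = AudioSegment.silent(duration=5000, frame_rate=44100)  # 5-second bed
    for path, depth in sound_files_with_depth:
        clip = AudioSegment.from_file(path).apply_gain(-max_attenuation_db * depth)
        mix = mix.overlay(clip)
    return mix

scene = mix_scene([
    ("birds.wav", 0.2),    # close by, nearly full volume
    ("traffic.wav", 0.9),  # far away, heavily attenuated
])
scene.export("realive_audio.wav", format="wav")
```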
losing
## Inspiration As post-secondary students ourselves, one of the most stress-inducing things is exams, so we decided to do something about it. ## What it does It registers students into a study group and allows them to each come up with questions they have about an upcoming exam. All the questions are then pooled together and given out to each student, who answers those from their peers. Finally, all the responses are shown with the questions and the group can talk to each other to discuss the responses and gain additional insight. ## How I built it This project was built using two React servers: one for the front end, where users interact and send data using Solace, and one for the backend, which receives messages and broadcasts them to any subscribers. ## Challenges I ran into Setting up and understanding how Solace worked was no easy task and took several tries and mentor help to fully overcome. ## Accomplishments that I'm proud of Having the clients and servers communicate seamlessly across multiple devices. ## What I learned It's always a good idea to talk to others when you're stuck and to narrow down problems step by step. ## What's next for YesBrainer There are many ways to extend the project further; one is to store group session results in a persistent database and compare them across multiple groups. Forum-style discussion spaces could also be created to foster additional teamwork and mutual knowledge gain.
## 💡Inspiration Your professor assigns a YouTube lecture to review, but you end up watching Naruto or a Mr. Beast video instead. GEESE WHAT, you missed the deadline! We've all been there, and it's not a great feeling ... that's where GoosePeak steps in to help 💪. GoosePeak is a chrome extension that's dedicated towards helping you reach your "peak" productivity, by "peeking" at your screen! With GoosePeak, you find your own goose buddy 🪿that grows more and more frustrated as you neglect your tasks and absorb irrelevant content! ## 🧐What it does GoosePeak first asks for your task list. Once your tasks are confirmed, you are free to browse the web. As long as you stay on sites that align with your tasks, your virtual goose buddy remains calm and content. Very demure, very mindful 😌! However, if you stray off course, be prepared to face the wrath of your goose buddy as their frustration grows 😡. Not only that, you'll be faced with an annoying pop-up that will steer you right back to being productive and navigating the right links. ## ❓How we built it In developing GOOSEPEAK, we used the following technologies: **Design:** Played around with Figma to create the different pages and UI elements. **Front-end:** We built our interface and pop-ups using React, HTML, TailwindCSS, and JS **Back-end:** Our back-end is powered by Node.js, and we also leveraged the Cohere API. **APIs:** We have used Cohere's API to extract information from the links and identify whether they include keywords that are relevant to the task list. ## 💢Challenges we ran into Like any hacker, we encountered our own issues with merge conflicts. We also struggled with connecting to the Cohere API in a chrome extension environment. It even took all of us quite some time in understanding everyone’s code and building our code based on each other’s outputs. ## 🎖️Accomplishments that we're proud of Although our team has little to no experience in developing chrome extensions, we are very proud that we have come this far to create a working MVP in a fast-paced environment. We are also excited to see our adorable geese come to life, transforming from a simple idea into an engaging feature on our Chrome extension 🖊️ ## 😎What we learned * The detailed process of creating a chrome extension * How to incorporate the cohere API into our development environment successfully * Adding appropriate design choices (eg. color scheme, fonts, text colors) for a friendly user interface ## 👀What's next for GoosePeak * Include a machine learning aspect, where it detects if individuals are looking down on their phone during their productivity time * Transform into a mobile app that detects productivity on social media * Similar to Duolingo, include streaks to further encourage individuals to be productive daily
## 💡 Inspiration It's job search season (again)! One of the most nerve-wracking aspects of applying for a job is the behavioural interview, yet there is no good method to help interviewees prepare effectively. Most existing services are not able to emulate real-life interview scenarios sufficiently, given that the range of practice questions asked is limited without knowledge of the interviewee's experience. In addition, interviewers rarely provide constructive feedback, leaving interviewees feeling confused and without a plan for improvement. ## 💻 What it does HonkHonkHire is an AI-powered interview practice application that analyzes your unique experiences (via LinkedIn/resume/portfolio) and generates interview questions and feedback tailored for you. By evaluating facial expressions and transcribing your responses, HonkHonkHire provides useful metrics to give actionable feedback and help users gain confidence for their job interviews. ## ⚒️ How we built it HonkHonkHire was built on a Node.js back end, with a MongoDB Atlas database to store users' data. We also used Firebase to store resume .pdf files. The user interface was designed in Figma, and developed using JavaScript, HTML, and CSS. The facial recognition was implemented using Google MediaPipe Studio, and Cohere's API was used to generate the personalized questions and feedback. ## 🪖 Challenges we ran into This project was very ambitious for our team, given that it involved learning and applying a lot of unfamiliar technologies in a short period of time. As we had never worked together as a team before, it took some time to familiarize ourselves with each other's strengths, weaknesses, and communication styles. ## 🌟 Accomplishments that we're proud of Our team is proud that we were able to deliver the majority of the MVP features, and to have created something we would all love to use for our future job searches. Learning new technologies such as Cohere's API and OCR Space, and being able to employ them in our application, was also very rewarding. ## 🧠 What we learned Each one of our team members challenged themselves while creating this application. Our designer ventured into the world of programming and learned some HTML/CSS while coding a webpage for the very first time (yay)! Our front-end developer challenged herself by focusing on fundamentals such as vanilla JS instead of more familiar frameworks. Another one of our developers learned about new APIs by reading documentation, and transferring data from front end to back end (and vice versa) using Node.js. Our primary back-end developer challenged himself to explore facial/emotional expression and behavioural motion detection for the first time. ## 👀 What's next for HonkHonkHire We would love to further enhance the user experience by offering more detailed metrics such as talking pace, response length, and tone of voice. Through these metrics, we hope to give users the ability to track their improvements over their professional journey and encourage them to continue improving their behavioural interview skills.
losing
## Inspiration Too many times have broke college students looked at their bank statements and lamented how much money they could've saved if they had known about alternative purchases or savings earlier. ## What it does SharkFin helps people analyze and improve their personal spending habits. SharkFin uses bank statements and online banking information to determine areas in which the user could save money. We identified multiple different patterns in spending that we then provide feedback on to help the user save money and spend less. ## How we built it We used Node.js to create the backend for SharkFin, and we used the Viacom DataPoint API to manage multiple other APIs. The front end, in the form of a web app, is written in JavaScript. ## Challenges we ran into The Viacom DataPoint API, although extremely useful, was something brand new to our team, and there were few online resources we could look at. We had to understand completely how the API simplified and managed all the APIs we were using. ## Accomplishments that we're proud of Our data processing routine is highly streamlined and modular, and our statistical model identifies and tags recurring events, or "habits," very accurately. By using the DataPoint API, our app can very easily accept new APIs without structurally modifying the back end. ## What we learned ## What's next for SharkFin
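To make the "habits" idea concrete, here is a minimal pandas sketch that tags recurring transactions: purchases at the same merchant that recur at roughly regular intervals. The column names and thresholds are assumptions for illustration, not SharkFin's actual statistical model.

```python
# Minimal sketch of tagging recurring "habits" in a transaction history.
import pandas as pd

def find_habits(tx: pd.DataFrame, max_gap_std_days=3, min_occurrences=3):
    """tx needs columns: merchant, amount, date (datetime64)."""
    habits = []
    for merchant, group in tx.sort_values("date").groupby("merchant"):
        if len(group) < min_occurrences:
            continue
        gaps = group["date"].diff().dt.days.dropna()
        # A "habit" = repeated purchases at roughly regular intervals.
        if gaps.std() <= max_gap_std_days and gaps.mean() > 0:
            habits.append({
                "merchant": merchant,
                "avg_amount": round(group["amount"].mean(), 2),
                "every_n_days": round(gaps.mean(), 1),
                "yearly_cost": round(group["amount"].mean() * 365 / gaps.mean(), 2),
            })
    return pd.DataFrame(habits)
```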
## Inspiration In today's age, people have become more and more divided in their opinions. We've found that discussion nowadays can just result in people shouting instead of trying to understand each other. ## What it does **Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics. They can range from silly scenarios to more serious situations (e.g. Is a hot dog a sandwich? Is mass surveillance needed?). Once you've picked a topic and your opinion of it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 minutes to chat with this person and hopefully discover your similarities and differences in perspective. After the chat is over, you are asked to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions, as both matched users will have reputations for maturity. ## How we built it **Tech Stack** * Front-end/UI + Flutter and Dart + Adobe XD * Backend + Firebase - Cloud Firestore - Cloud Storage - Firebase Authentication **Details** * Front end was built after developing UI mockups/designs * Heavy use of advanced widgets and animations throughout the app * Creation of multiple widgets that are reused around the app * Backend uses Gmail authentication with Firebase * Topics for debate are uploaded using Node.js to Cloud Firestore and are displayed in the app using specific Firebase packages * Images are stored in Firebase Storage to keep the source files together ## Challenges we ran into * Initially connecting Firebase to the front end * Managing state while implementing multiple complicated animations * Designing the backend, mapping users with each other, and allowing them to chat ## Accomplishments that we're proud of * The user interface we made and the animations on the screens * Sign up and login using Firebase Authentication * Saving user info into Firestore and storing images in Firebase Storage * Creation of beautiful widgets ## What we learned * Deeper dive into state management in Flutter * How to design UI/UX with fonts and colour palettes * Learned how to use Cloud Functions in Google Cloud Platform * Built on top of our knowledge of Firestore ## What's next for Change My Mind * More topics and user settings * Implementing ML to match users based on maturity and other metrics * Potential monetization of the app, premium analysis of user conversations * Clean up the Coooooode! Better implementation of state management, specifically implementation of Provider or BLoC.
## Inspiration The inspiration for the project was to design a model that could detect fake loan entries hidden amongst a set of real loan entries. Our group was also eager to design a dashboard to present these statistics: many similar services are good at identifying outliers in data but are unfriendly to the user. We wanted businesses to see and understand fake data immediately, because it's important to recognize it quickly. ## What it does Our project handles back-end and front-end tasks. Specifically, on the back-end, the project uses libraries like Pandas in Python to parse input data from CSV files. Then, after creating histograms and linear regression models that detect outliers in the given input, the data is passed to the front-end to display the histogram and present outliers to the user for an easy experience. ## How we built it We built this application using Python in the back-end. We utilized Pandas for efficiently storing data in DataFrames. Then, we used NumPy and scikit-learn for statistical analysis. On the server side, we built the website in HTML/CSS and used Flask and Django to handle events on the website and interaction with other parts of the code. This involved taking a CSV file from the user, parsing it into a string, running our back-end model, and displaying the results to the user. ## Challenges we ran into There were many front-end and back-end issues, but they ultimately helped us learn. On the front-end, the biggest problem was using Django with the browser to bring this experience to the user. Also, on the back-end, we found using Keras to be an issue at the start of the process, so we had to switch frameworks mid-way. ## Accomplishments that we're proud of An accomplishment was being able to bring both sides of the development process together. Specifically, creating a UI with a back-end was a painful but rewarding experience. Also, implementing cool machine learning models that could actually find fake data was really exciting. ## What we learned One of our biggest lessons was to use libraries more effectively to tackle the problem at hand. We started creating a machine learning model using Keras in Python, which turned out to be ineffective for what we needed. After much help from the mentors, we played with other libraries that made it easier to implement linear regression, for example. ## What's next for Financial Outlier Detection System (FODS) Eventually, we aim to use more sophisticated statistical tools to analyze the data. For example, a Random Forest could be used to identify key characteristics of the data, helping us choose our linear regression models before building them. Also, one cool idea is to search for linearly dependent columns in the data; they would help find outliers and eliminate trivial or useless variables in new data quickly.
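A minimal scikit-learn sketch of the residual-based outlier idea described above: fit a linear regression on the numeric loan fields and flag rows whose residuals fall far outside the typical spread. The column names and the 3-sigma cutoff are assumptions; FODS's actual models may differ.

```python
# Minimal sketch of residual-based outlier flagging with scikit-learn.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def flag_outliers(df: pd.DataFrame, feature_cols, target_col, z_cutoff=3.0):
    X = df[feature_cols].to_numpy()
    y = df[target_col].to_numpy()

    model = LinearRegression().fit(X, y)
    residuals = y - model.predict(X)

    # Rows whose residuals sit far outside the typical spread are suspicious.
    z = (residuals - residuals.mean()) / residuals.std()
    return df.assign(outlier=np.abs(z) > z_cutoff)

# Example (column names are hypothetical): flag loans whose amount doesn't fit
# the applicant's income and credit score.
# flagged = flag_outliers(loans, ["income", "credit_score"], "loan_amount")
```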
winning
## Not All Backs are Packed: An Origin Story (Inspiration) A backpack is an extremely simple, and yet ubiquitous item. We want to take the backpack into the future without sacrificing its simplicity and functionality. ## The Got Your Back, Pack: **U N P A C K E D** (What's it made of) GPS location services, 9000 mAh battery, solar charging, USB connectivity, keypad security lock, customizable RGB LED, and Android/iOS application integration. ## From Backed Up to Back Pack (How we built it) ## The Empire Strikes **Back**(packs) (Challenges we ran into) We ran into challenges with getting wood to laser cut and bend properly. We found a unique pattern that allowed us to keep our 1/8" wood durable when needed and flexible when not. Also, connecting the hardware and the app through the API was tricky. ## Something to Write **Back** Home To (Accomplishments that we're proud of) ## Packing for Next Time (Lessons Learned) ## To **Pack**-finity, and Beyond! (What's next for "Got Your Back, Pack!") The next step would be revising the design to be more ergonomic for the user: the backpack is a very clunky, easy-to-make shape with few curves to hug the user when put on. This, along with streamlining the circuitry and code, would be something to consider.
## Inspiration The only thing worse than no WiFi is slow WiFi. Many of us have experienced the frustrations of terrible internet connections. We have too, so we set out to create a tool to help users find the best place around to connect. ## What it does Our app runs in the background (completely quietly) and maps out the WiFi landscape of the world. That information is sent to a central server and combined with location and WiFi data from all users of the app. The server then processes the data and generates heatmaps of WiFi signal strength to send back to the end user. Because of our architecture these heatmaps are real time, updating dynamically as the WiFi strength changes. ## How we built it We split the work into three parts: mobile, cloud, and visualization, and had each member of our team work on a part. For the mobile component, we quickly built an MVP iOS app that could collect and push data to the server and iteratively improved our locationing methodology. For the cloud, we set up a Firebase Realtime Database (NoSQL) to allow for large amounts of data throughput. For the visualization, we took the points we received and used Gaussian kernel density estimation to generate interpretable heatmaps. ## Challenges we ran into Engineering an algorithm to determine the location of the client was significantly more difficult than expected. Initially, we wanted to use accelerometer data and use GPS to calibrate it, but excessive noise in the resulting data prevented us from using it effectively and from proceeding with this approach. We ran into even more issues when we used a device with less accurate sensors, like an Android phone. ## Accomplishments that we're proud of We are particularly proud of getting accurate paths travelled from the phones. We initially tried to use double integrator dynamics on top of oriented accelerometer readings, correcting for errors with GPS. However, we quickly realized that without prohibitively expensive filtering, the data from the accelerometer was useless, and that GPS did not function well indoors due to the walls affecting the time-of-flight measurements. Instead, we used a built-in pedometer framework to estimate distance travelled (this used a lot of advanced on-device signal processing) and combined this with the average heading (calculated using a magnetometer) to get meter-level accurate distances. ## What we learned Locationing is hard, especially indoors or over short distances. Firebase's Realtime Database was extremely easy to use and very performant. Distributing the data processing between the server and client is a balance worth playing with. ## What's next for Hotspot Next, we'd like to expand our work on the iOS side and create a sister application for Android (currently in the works). We'd also like to overlay our heatmap on Google Maps. There are also many interesting things you can do with a WiFi heatmap. Given some user settings, we could automatically switch from WiFi to data when the WiFi signal strength is about to get too poor. We could also use this app to find optimal placements for routers. Finally, we could use the application in disaster scenarios to compute, on the fly, areas where internet access is still up, or to produce approximate population heatmaps.
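For reference, the heatmap step can be sketched in a few lines of SciPy: a Gaussian kernel density estimate over the reported coordinates, weighted by signal strength, evaluated on a grid. The input format and grid resolution are assumptions for illustration, not Hotspot's production code.

```python
# Minimal sketch of a signal-strength-weighted Gaussian KDE heatmap.
import numpy as np
from scipy.stats import gaussian_kde

def wifi_heatmap(lats, lons, rssi, grid=100):
    # Stronger signals should pull the density up, so use them as weights
    # (RSSI is negative dBm, so shift it to be positive first).
    weights = np.asarray(rssi) - min(rssi) + 1.0
    kde = gaussian_kde(np.vstack([lons, lats]), weights=weights)

    gx, gy = np.meshgrid(np.linspace(min(lons), max(lons), grid),
                         np.linspace(min(lats), max(lats), grid))
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(grid, grid)
    return gx, gy, density  # ready for e.g. plt.pcolormesh(gx, gy, density)
```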
## Inspiration As students in a busy city, safety is our topmost concern. We need to stay connected so that, in case of an emergency, others can quickly find out about it and take action. However, existing tools do not offer the right features and are not convenient to use. As such, we wanted to build a solution that would allow students to easily share their location with the people they trust, in case of an emergency. This way, another person has all the necessary information to notify the authorities and try to find and help their trustees. ## What it does *I'll Be Back* allows students to say "I'm going [there], and keep [my trusted friend] up to date just in case." The walker enters a trusted person's name and phone number, and also adds where they're going, when they are coming back, and a short message. The trusted person gets an SMS notification that their friend is going out, so they can stay alert in case an emergency comes up. If something happens, the walker can alert the trustee with a single click and send their *I'll Be Back* app URL via message. Additionally, *I'll Be Back* watches for anomalies, such as the phone shutting off or the walker taking too long, and notifies the trustee. The trustee can see where their friend was going to and from, and what their last synced location was. ## How we built it *I'll Be Back* is a reactive web app with React on the front end, and Node.js and Express on the back end. We used Socket.io to efficiently sync the walker's location with our server. The whole website is hosted on Google Cloud and uses a Domain.com domain: `illbeback.tech`. For all notifications to the trustee, we use Twilio. ## Challenges we ran into * Managing the deadline - we had to pivot a few times until we had a clear vision of the app we wanted to build. Unfortunately, we had to cut some of the features we envisioned, such as smart anomaly detection, to finish the app on time. * Working on one project as a group - as a beginner hackathon team, we had to efficiently distribute work to best fit our strengths. While working together was a lot of fun, we had to quickly resolve code merge conflicts and help each other in case of major issues. * Picking the best ideas - deciding together what features we could realistically implement in such a short period. ## Accomplishments that we're proud of * Successfully finishing, integrating, and demoing the end-to-end product. * Integrating the Google Maps API to push, fetch, and display the walker's location. * Using Twilio for efficient SMS trustee notifications. * Figuring out how to run a GCP server and linking our `illbeback.tech` domain. * Resolving some nerve-wracking bugs in the middle of the night :) ## What we learned * How to set up GCP * How to use the Twilio API * How to integrate Google Maps with our website * How to configure website routing ## What's next for *I'll Be Back* * Collaborating with the campus police so that walkers can notify them immediately of their location in case of an emergency, improving on the existing Cal BearWalks. * Persistent storage to store the accounts of trusted friends, common destinations, etc. * Machine learning-based anomaly detection during journeys to intelligently detect potential safety threats to the walkers.
winning
## Inspiration Have you ever had to wait in long lines just to buy a few items from a store? Ever not wanted to interact with employees to get what you want? Now you can buy items quickly and hassle-free through your phone, without interacting with any people whatsoever. ## What it does CheckMeOut is an iOS application that allows users to buy an item that has been 'locked' in a store, for example, clothing with security sensors attached or items physically locked behind glass. Users can scan a QR code or use Apple Pay to quickly access information about an item (price, description, etc.) and 'unlock' the item by paying for it. The user does not have to interact with any store clerks or wait in line to buy the item. ## How we built it We used Xcode to build the iOS application, and MS Azure to host our backend. We used an Intel Edison board to help simulate the 'locking' of an item. ## Challenges I ran into We're using many technologies that our team is unfamiliar with, namely Swift and Azure. ## What I learned I've learned not to underestimate things you don't know, to ask for help when you need it, and to just have a good time. ## What's next for CheckMeOut We hope to see it more polished in the future.
## Inspiration The first step of our development process was conducting user interviews with university students within our social circles. When asked about some recently developed pain points, 40% of respondents stated that grocery shopping has become increasingly stressful and difficult with the ongoing COVID-19 pandemic. The respondents also stated that some motivations included a loss of disposable time (due to an increase in workload from online learning), tight spending budgets, and fear of exposure to COVID-19. While developing our product strategy, we realized that a significant pain point in grocery shopping is the process of price-checking between different stores. This process requires the user to visit each store (in person and/or online), check the inventory, and manually compare prices. Consolidated platforms to help with grocery list generation and payment do not exist in the market today - as such, we decided to explore this idea. **What does G.e.o.r.g.e stand for? : Grocery Examiner Organizer Registrator Generator (for) Everyone** ## What it does The high-level workflow can be broken down into three major components: 1: Python (Flask) and Firebase backend, 2: React frontend, 3: Stripe API integration. Our backend Flask server is responsible for web scraping and generating semantic, usable JSON for each product item, which is passed through to our React frontend. Our React frontend acts as the hub for tangible user-product interactions. Users are given the option to search for grocery products, add them to a grocery list, generate the cheapest possible list, compare prices between stores, and make a direct payment for their groceries through the Stripe API. ## How we built it We started our product development process by brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment service application, we drew up designs and prototyped using Figma, then proceeded to implement the front-end designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data. ## Challenges we ran into Once we had finished coming up with our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs to access price data for grocery stores, so we decided to do our own web scraping. This led to complications with slower server response, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which would allow us to flesh out our end-to-end workflow. ## Accomplishments that we're proud of Some of the websites we had to scrape had lots of information to comb through, and we are proud of how we picked up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that included even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to utilize real money with our app. ## What we learned We picked up skills such as web scraping to automate the process of parsing through large data sets. Web scraping dynamically generated websites can also lead to slow server response times, which are generally undesirable. It also became apparent to us that we should have set up virtual environments for Flask applications so that team members do not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3am will make you want to pull out your hair, but at least we now know that it can be done :’) ## What's next for G.e.o.r.g.e. Our next steps with G.e.o.r.g.e. would be to improve the overall user experience of the application by standardizing our UI components and UX workflows with e-commerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as to create more seamless payment solutions.
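A minimal sketch of the "cheapest possible list" step described above: given scraped price rows, keep the lowest price per item across stores and tally the total. The field names are assumptions for illustration, not the exact schema the scraper emits.

```python
# Minimal sketch of picking the lowest price per item and totalling the basket.
import pandas as pd

def cheapest_basket(scraped_rows, wanted_items):
    """scraped_rows: list of dicts like {"item": "milk", "store": "X", "price": 4.99}."""
    df = pd.DataFrame(scraped_rows)
    df = df[df["item"].isin(wanted_items)]

    # For each item, keep the row with the lowest price across all stores.
    best = df.loc[df.groupby("item")["price"].idxmin()]
    return best[["item", "store", "price"]], round(best["price"].sum(), 2)

# basket, total = cheapest_basket(rows, ["milk", "eggs", "bread"])
```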
## Ark Platform for an IoT powered Local Currency ## Problem: Many rural communities in America have suffered from underinvestment in our modern age. Even urban areas such as Detroit, MI, and Scranton, PA, have been left behind as their local economies struggle to reach a critical mass from which to grow. This underinvestment has left millions of citizens in a state of economic stagnation with little opportunity for growth. ## Big Picture Solution: Cryptocurrencies allow us to implement new economic models to empower local communities and spark regional economies. With Ark.io and their blockchain solutions we implemented a location-specific currency with unique economic models. Using this currency, experiments can be run on a regional scale before being more widely implemented, all without an increase in government debt and with the security of blockchains. ## To Utopia!: By implementing local currencies in economically depressed areas, we can incentivize investment in the local community, and thus provide more citizens with economic opportunities. As the local economy improves, the currency becomes more valuable, which further spurs growth. The positive feedback could help raise standards of living in areas currently in a state of stagnation. ## Technical Details **LocalARKCoin (LAC)** LAC is based on a fork of the ARK cryptocurrency, with its primary feature being its relation to geographical location. Only a specific region can use the currency without fees, and any fees collected are sent back to the region that is being helped economically. The fees are dynamically raised based on the distance from the geographic region in question. All of these rules are implemented within the logic of the blockchain and so cannot be bypassed by individual actors. **Point of Sale Terminal** Our proof-of-concept point-of-sale terminal consists of the Adafruit Huzzah ESP32 microcontroller board, which has integrated WiFi to connect to the ARK API to verify transactions. The ESP32 connects to a GPS board, which allows verification of the location of the transaction, and an NFC breakout board that allows contactless payment with mobile-phone cryptocurrency wallets. **Mobile Wallet App** In development is a mobile wallet for our local currency which would allow any interested citizen to enter the local cryptocurrency economy. Initiating transactions with other individuals will be simple, and contactless payments allow easy purchases with participating vendors.
winning
## Inspiration While munching down on 3 Snickers bars, 10 packs of Welch's Fruit Snacks, a few Red Bulls, and an apple, we were trying to think of a hack idea. It then hit us that we were eating so unhealthy! We realized that as college students, we are often less aware of our eating habits since we are more focused on other priorities. Then came GrubSub, a way for college students to easily discover new foods for their eating habits. ## What it does Imagine that you have recently been tracking your nutrient intake, but have run into the problem of eating the same foods over and over again. GrubSub allows a user to discover different foods that fulfill their nutritional requirements, substitute missing ingredients in recipes, or simply explore a wider range of eating options. ## How I built it GrubSub utilizes a large data set of foods with information about their nutritional content such as proteins, carbohydrates, fats, vitamins, and minerals. GrubSub takes in a user-inputted query and finds the best matching entry in the data set. It searches through the list for the entry with the highest number of common words and the shortest length. It then compares this entry with the rest of the data set and outputs a list of foods that are the most similar in nutritional content. Specifically, we rank their similarities by calculating the sum of squared differences of each nutrient variable for each food and our query. ## Challenges I ran into We used the Django framework to build our web application with the majority of our team not having prior knowledge with the technology. We spent a lot of time figuring out basic functionalities such as sending/receiving information between the front and back ends. We also spent a good amount of time finding a good data set to work with, and preprocessing the data set so that it would be easier to work with and understand. ## Accomplishments that I'm proud of Finding, preprocessing, and reading in the data set into the Django framework was one of our first big accomplishments since it was the backbone of our project. ## What I learned We became more familiar with the Django framework and python libraries for data processing. ## What's next for GrubSub A better underlying data set will naturally make the app better, as there would be more selections and more information with which to make comparisons. We would also want to allow the user to select exactly which nutrients they want to find close substitutes for. We implemented this both in the front and back ends, but were unable to send the correct signals to finish this particular function. We would also like to incorporate recipes and ingredient swapping more explicitly into our app, perhaps by taking a food item and an ingredient, and being able to suggest an appropriate alternative.
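The ranking GrubSub describes above (sum of squared differences over the nutrient columns, smallest distance first) can be sketched in a few lines of pandas; the column names below are assumptions, since the real dataset has many more nutrient fields.

```python
# Minimal sketch of the nutrient-similarity ranking: squared differences per
# nutrient, summed into one distance per food, sorted ascending.
import pandas as pd

NUTRIENTS = ["protein", "carbs", "fat", "fiber", "sugar"]  # placeholder columns

def closest_substitutes(foods: pd.DataFrame, query_name: str, top_n=10):
    query = foods.loc[foods["name"] == query_name, NUTRIENTS].iloc[0]
    distance = ((foods[NUTRIENTS] - query) ** 2).sum(axis=1)
    ranked = foods.assign(distance=distance).sort_values("distance")
    return ranked[ranked["name"] != query_name].head(top_n)
```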
## Inspiration Everybody eats, and in college, if you are taking difficult classes, it is often challenging to hold a job; therefore, as college students we have no income during the year. Our inspiration came as we moved off campus this year to live in apartments with full kitchens but without the funds to make complete meals at a reasonable price. Along came the thought that we couldn't be the only ones with this issue, so... "what if we made an app where all of us could connect via a social media platform and share and post our meals with a price range attached, so that we don't have to come up with good cost-effective meals on our own?" ## What it does Our app connects college students, or anyone looking for a great way to find good cost-effective meals who doesn't want to come up with the meals on their own, by allowing everyone to share their meals and create an abundant database of food. ## How we built it We used Android Studio to create the application and tested the app using the built-in emulator to see how the app was coming along when viewed on the phone. Specifically, we used an MVVM design to interweave the functionality with the graphical display of the app. ## Challenges we ran into The backend that we were familiar with ended up not working well for us, so we had to transition to another backend provider called Back4App. We were also challenged by the user's personal view and by saving the user's data reliably. ## Accomplishments that we're proud of We are proud of all the work that we put into the application in a very short amount of time, and of learning how to work with a new backend at the same time so that everything worked as intended. We are proud of the process and organization we had throughout the project, beginning with a wireframe and building our way up part by part until the finished project. ## What we learned We learned how to work with drop-down menus that hold multiple values of possible data for the user to choose from. One of our group members also learned how to work with app development at full scale for the first time. ## What's next for Forkollege In version 2.0 we plan on implementing a better settings page that allows the user to change their password. We also plan on fixing the For You page: for each recipe displayed, we were not able to come up with a way to showcase the number of $ signs and instead opted for using stars again. As an outside user this is a little confusing, so updating this aspect is of the utmost importance.
## Inspiration In the work-from-home era, many are missing the social aspect of in-person work. And what time of the workday most provided that social interaction? The lunch break. culina aims to bring the social aspect back to work-from-home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded, and diversifies their palate by exposing them to international cuisine (that uses food they already have on hand)! ## What it does First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and shown a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate how they want to remotely enjoy their meal together. ## How we built it The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas, also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe. ## Challenges we ran into We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also had to overcome technical roadblocks on their machines, so they had to think outside the box for solutions. ## Accomplishments that we're proud of We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!) Furthermore, we are proud of the novel dataset that we created for our application. ## What we learned Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some frontend basics, as well as explore image crawling techniques. For Camilla and David, React was a new skill to learn, and this hackathon improved their styling techniques using CSS. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators. ## What's next for culina Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition.
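As an illustration of the pantry-matching idea, here is a minimal sketch that scores waiting users by ingredient overlap (Jaccard similarity) and picks the best match. This is a sketch of the concept only, not culina's actual matching code.

```python
# Minimal sketch of matching two users by how similar their pantries are.
def pantry_similarity(pantry_a, pantry_b):
    """Jaccard overlap between two sets of ingredient names."""
    if not pantry_a or not pantry_b:
        return 0.0
    return len(pantry_a & pantry_b) / len(pantry_a | pantry_b)

def best_match(my_pantry, waiting_users):
    """waiting_users: {user_id: set_of_ingredients} currently in the queue."""
    scored = {uid: pantry_similarity(my_pantry, p) for uid, p in waiting_users.items()}
    best = max(scored, key=scored.get, default=None)
    return best if best is not None and scored[best] > 0 else None

# best_match({"rice", "egg", "scallion"}, {"u1": {"rice", "egg"}, "u2": {"flour"}})
# -> "u1"
```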
losing
## Inspiration Patients with hand tremors have a difficult time eating with regular utensils, and the current products on the market are very expensive. This device provides the same service to the user at a much more affordable price. ## What it does This device uses a gyroscopic sensor and accelerometer to determine the position of the device after calibration. Using the data received from the sensors, the servo motors move in real time to negate any movement. This allows the spoon to stabilize, helping the user enjoy their meal! ## How I built it Using the Arduino microprocessors connected to the different sensors, information was constantly sent back and forth between the sensors and the motors to negate any movement made by the user and create a stable spoon. ## Challenges I ran into The calibration process was the most challenging, as it was time-consuming as well as very complicated to implement. ## Accomplishments that I'm proud of The final product. ## What I learned The world is a very shaky place, with a lot of hurdles, but on the plus side, at least our design can stabilize food and help people at the same time. ## What's next for GyroSpoon Forbes Top 30 under 30
## Inspiration The inspiration for this project came from recognizing that this seemingly simple app could serve a great social cause, and that there is nothing like it on the market. We wanted to create a tool for Alzheimer's patients that would be a single platform aggregating tools for their individual needs. We also wanted to gain experience and play with different technologies. ## What it does Our app helps Alzheimer's patients recognize people, recall associated memories, and manage reminders for daily activities using augmented reality and machine learning. Our reminder system helps users keep track of their daily routines and medications. ## How we built it Our iOS application uses Microsoft Azure's Custom Vision to recognize family and friends. The app uses Core ML, ARKit, and the Vision framework to label the recognized people in real time. Using Houndify API custom commands and the Oracle Cloud Database, the user can verbally request and receive information about people saved to their account. We also have a voice assistant utilizing the Almond Voice API offered by Stanford to greet the user. ## Challenges we ran into During the project, we ran into a few challenges - starting with the lack of documentation for some APIs and finishing with software failure (the iPhone simulator). We used several powerful APIs, yet it was a challenge to integrate them into our application due to the lack of good documentation. In addition, all of us were using RESTful APIs for the first time. ## Accomplishments that we're proud of We are proud that our application has a great social cause. We hope it may help patients ease their daily lives. We are also proud that we were able to overcome the implementation difficulties. ## What we learned We learned how to integrate different APIs, such as voice assistants and computer vision. ## What's next for memory lane We want to provide users with the option of uploading images and to improve the accuracy of recognition.
# Inspiration When we ride a cycle, we steer the cycle towards the direction we fall, and that's how we balance. The amazing part is that we (or our brain) compute a complex set of calculations to balance ourselves, and we do it naturally. With some practice it gets better. The calculation that goes on in our mind highly inspired us. At first we thought of creating the cycle, but later, after watching "Handle" from Boston Dynamics, we thought of creating the self-balancing robot, "Istable". One more inspiration was mathematically modelling systems in our control systems labs, along with learning how to tune a PID-controlled system. # What it does Istable is a self-balancing robot running on two wheels; it balances itself and tries to stay vertical all the time. Istable can move in all directions, rotate, and can be a good companion for you. It gives you a really good idea of robot dynamics and kinematics, and of how to mathematically model a system. It can also be used to teach college students the tuning of PIDs in a closed-loop system: PID is a widely used and very robust control algorithm. Tuning methods from Ziegler-Nichols and Cohen-Coon to Kappa-Tau can also be implemented to teach how to obtain the coefficients in real time, taking theoretical modelling to real-time implementation. We can also use it in hospitals, where medicines are kept at very low temperatures and absolute sterilization is needed to enter; these robots can carry out those operations there, and in the coming days we may see our hospitals take a new form. # How we built it The mechanical body was built using scrap wood lying around. We first planned how the body would look; we gave it the shape of an "I", and that's where it gets its name. We made the frame, placed the batteries (2x 3.7 V Li-ion), the heaviest component, attached two micro metal motors of 600 RPM at 12 V, cut two hollow circles from an old sandal to make tires (rubber gives a good grip, and trust me, it is really necessary), used a boost converter to stabilize and step up the voltage from 7.4 V to 12 V, a motor driver to drive the motors, an MPU6050 to measure the inclination angle, and a Bluetooth module to read the PID parameters from a mobile app to fine-tune the PID loop. And the brain: a microcontroller (LGT8F328P). Next we made a free-body diagram and located the centre of gravity, which needs to sit above the wheel axle, and adjusted the weight distribution accordingly. Next we made a simple mathematical model to represent the robot; it is used to find the transfer function that represents the system. Later we used that to calculate the impulse and step response of the robot, which is very crucial for tuning the PID parameters if you are taking a mathematical approach, and we did that here: no hit-and-trial, only application of engineering. The microcontroller runs a discrete (z-domain) PID controller to balance Istable. # Challenges we ran into We were having trouble balancing Istable at first (which is obvious ;) ), and we realized that was due to the placement of the gyro. We had placed it at the top at first; we corrected that by placing it at the bottom, which naturally eliminated the tangential component of the angle and thus improved stabilization greatly. Next, fine-tuning the PID loop: we did get the initial values of the PID coefficients mathematically, but the fine-tuning took a hell of a lot of effort, and that was really challenging. # Accomplishments we are proud of Firstly, a simple change in the position of the gyro improved the stabilization, and that gave us high hopes; we were losing confidence before. The mathematical model, or transfer function, we found matched the real-time outputs well. We were happy about that. Last but not least, the yay moment was when we tuned the robot correctly and it balanced for more than 30 seconds. # What we learned We learned a lot, a lot. From kinematics, mathematical modelling, and control algorithms beyond PID (we learned about adaptive control) to numerical integration. We learned how to tune PID coefficients properly and mathematically, with no trial-and-error method. We learned about digital filtering to filter out noisy data, along with complementary filters to fuse the data from the accelerometer and gyroscope to find accurate angles. # What's next for Istable We will try to add an onboard display to change the coefficients directly on the robot. We will upgrade the algorithm so that it can auto-tune the coefficients itself. We will add odometry for localized navigation, and also use it as an IoT device to serve real-time operation with real-time updates.
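To make the control loop concrete, here is an illustrative sketch of the two pieces described above: a complementary filter fusing accelerometer and gyro readings, and a discrete PID update driving the motor command. The gains and the 0.98 filter coefficient are placeholders, not Istable's tuned values (the real loop runs on the LGT8F328P, not in Python).

```python
# Illustrative sketch of a complementary filter plus a discrete PID update.
class BalanceController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.angle = 0.0          # fused tilt estimate (degrees)
        self.integral = 0.0
        self.prev_error = 0.0

    def fuse(self, accel_angle, gyro_rate):
        # Complementary filter: trust the gyro short-term, the accelerometer long-term.
        self.angle = 0.98 * (self.angle + gyro_rate * self.dt) + 0.02 * accel_angle
        return self.angle

    def update(self, setpoint=0.0):
        # Discrete PID on the tilt error; the output becomes the motor command.
        error = setpoint - self.angle
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```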
losing
## Inspiration We are a group of students passionate about the intersection of health and artificial intelligence. We saw that there is generally a lack of machine learning used in the health field compared to other fields, despite the potentially large impact it could have on people's wellbeing. We thought that creating a web app in the health field would let us create the most impact by directly helping people live a healthier lifestyle. ## What it does KnewHealth is a health-tracking web app that provides features aiming to help users live a healthier lifestyle by utilizing deep learning. One of the main features that accomplishes this is a calorie estimator that predicts how many calories a particular food contains based on an image uploaded by the user. In addition, this feature lets users see the nutritional breakdown of the food by providing an estimate of the amount of carbohydrates, proteins, and fats. It also outputs whether the food in the image is considered healthy or unhealthy. ## How we built it The web app was built using Flask. Login information for users and food image data are stored in tables in a CockroachDB database. The model used for the calorie estimator was built using a pre-trained vision transformer covering 100+ food classes. ## Challenges we ran into There were a few challenges we faced throughout the course of building this project. One of the biggest was training the model used in the calorie estimator. We had originally planned to utilize transfer learning with the ResNet-50 model to make predictions for the calorie estimator; however, we ran into many technical issues that we thought could not be fixed within a reasonable amount of time. Due to this, we changed our approach to using a pre-trained vision transformer instead, which circumvented many of the issues we were facing. ## Accomplishments that we're proud of We are very proud of building a fully functioning web app that is able to take in a food image and output the calorie, carbohydrate, protein, and fat estimates in a very short amount of time. The web app even allows users to create an account with a username and password, which they can use to log in and use the calorie estimator. ## What we learned We learned a lot about using several technologies such as Flask, PyTorch, and CockroachDB. Since much of the team did not have much experience with those technologies, it was a challenging but rewarding experience learning how to use them in a short period of time. ## What's next for KnewHealth We plan to implement more features such as a pushup and squat counter that uses video to help users keep track of how many repetitions of a given exercise they are doing. This would also ensure the user is doing the exercise correctly, as it would only count a repetition if it is done with proper form. With these additional features, we hope to continue working towards our goal of helping people live a healthier lifestyle.
## Inspiration Looking around you in your day-to-day life, you see so many people eating so much food. Trust me, this is going somewhere. All that stuff we put in our bodies, what is it? What are all those ingredients that seem more like chemicals that belong in nuclear missiles than in your 3-year-old cousin's Coke? Answering those questions is what we set out to accomplish with this project. But answering a question doesn't mean anything if you don't answer it well, meaning your answer raises as many or more questions than it answers. We wanted everyone, from pre-teens to senior citizens, to be able to understand it. So, in summary, we wanted to give all the lazy couch potatoes (us included) an easy, efficient, and most importantly, comprehensible method of knowing what it is exactly that we're consuming by the metric ton on a daily basis. ## What it does Our code takes input in the form of either text or an image, and we use it as input for an API from which we extract our final output using specific prompts. Some of our outputs are the nutritional values, a nutritional summary, the amount of exercise required to burn off the calories gained from the meal, (its recipe), and how healthy it is in comparison to other foods. ## How we built it Using Flask, HTML, CSS, and Python for the backend. ## Challenges we ran into We are all first-timers, so none of us had any idea how the whole thing worked, and individually we all faced our fair share of struggles with our food, our sleep schedules, and our timidness, which led to miscommunication. ## Accomplishments that we're proud of Making it through the week and keeping our love of tech intact. Other than that, we really did meet some amazing people and got to know so many cool folks. As a collective group, we are really proud of our teamwork and ability to compromise, work with each other, and build on each other's ideas. For example, we all started off with different ideas and different goals for the hackathon, but we ended up finding a project we all liked and found it in ourselves to bring it to life. ## What we learned How hackathons work and what they are. We also learned so much more about building projects within a small team, and what it is like and what should be done when the scope of what to build is so wide. ## What's next for NutriScan - Working ML - Use of the camera as an input to the program - Better UI - Responsiveness - Release
## Inspiration Much is known about how caloric intake can affect body appearance. Little, however, is said about the effects of healthy eating habits on the production of neurotransmitters and on mental health outcomes. Neurotransmitters are chemicals that help the brain communicate information across multiple neural structures. Dietary habits play an important role in the production of neurotransmitters, as they provide molecules (in the form of nutrients, for example) that favor the production of certain behaviors. In short, some neurotransmitters favor the expression of certain behaviors and mood changes; therefore, dietary habits can play a role in how humans behave and interact with the world. Our idea is based on very recent, innovative neuroscience research that is working to understand the hidden ways in which neurotransmitters contribute to human wellbeing. Showing people how their dietary habits influence their behavior and mood is beneficial because it creates a channel for change and improvement in habits. A poor diet has been scientifically linked to the development of mental and physical discomfort. Even worse, a poor diet can strengthen symptoms and increase the likelihood of developing mental illnesses such as depression and anxiety. Our app can serve as an incentive for people to change their lifestyle towards healthier habits. ## How we built it We started FoodMood as an Android app that would predict your mood on the basis of the food you consume. The user takes a photo of their food and uploads it, and on that basis we can predict their mood. We classified data in CSV files and built the computer vision backend, GAN. We tried to build the CV API to be called from the Android app, which didn't work and took a substantial amount of time, so we pivoted to a web app. We used Bubble to make the UI and connected the API via the Bubble API connector. ## What it does FoodMood allows users to scan their meals on the spot and receive a quick report on how the ingestion of that food might affect their mood and behavior throughout the day. Our app also provides a quick environmental report to the user, showing how the production of that food item affected carbon emissions. This is especially important as visualizing this type of data in a simple way can be a powerful incentive for people to change their habits. Our app uses data backed by neuroscience research to assess food items and tell users that they might see a certain type of mood change as a result of consuming that particular meal. Eating a burger, for example, might lead to an increase in the brain's release of dopamine, which translates to a person feeling pleasure and satisfaction. Over a short period of time, however, that satisfaction turns into a sensation of lethargy and tiredness, which is characteristic of consuming food that is high in fat and sugar. Our app will identify the burger (or whatever they are eating) and show the user how they might feel shortly as a result of that intake. Some fruits and vegetables are known for increasing attention and the ability to learn new skills. When somebody sees that information through our app, they might become naturally inclined to eat more healthy items as an outcome of having that information about food benefits fresh in their mind. With a quick picture analysis, our app shows users how their mood might change, giving insight into how better eating habits may favor a happier lifestyle. ## Challenges we ran into The first major blocker was determining what workflow we had to use for the computer vision model during the initial part of the hackathon. We were planning to have it done with a pre-trained model, but determined that for the scope of the project we would use an existing API from Clarifai to classify different kinds of food. We also ran into logistical issues when it came to collaboration, such as different time zones and minor technical problems with internet connections. There was also a learning curve in customizing Bubble for ML purposes. We also had scope creep issues: we initially planned to create an Android app in Java, but determined halfway through that there was not enough time to debug the API calls in Android Studio, and went with Bubble instead. ## Accomplishments that we're proud of We are very proud to have finished a project to present! We ran into many obstacles in this hackathon, so even finishing a final product to present is a huge accomplishment for us! The accomplishment we are most proud of is the idea itself, which could realistically become a real-world project that helps people improve their mental health and daily moods in a very simple way.
losing
## Inspiration We wanted to make eco-friendly travel accessible and enjoyable for all, while also supporting local businesses. ## What it does The Eco-Friendly Travel Planner is an AI-driven platform that offers personalized travel recommendations with a strong focus on sustainability. It provides travellers with eco-conscious destination suggestions and budget-friendly options, and supports local businesses. By leveraging AI, it tailors recommendations to each traveller's preferences, making eco-friendly travel easier and more enjoyable. ## How we built it We used HTML, CSS, JS, Node.js, MongoDB, the Google Maps API, a Carbon Footprint API, the GPT-3 API, and more to build this. ## Challenges we ran into Figuring out how to use and link the APIs; we spent hours trying to make them run as they should. ## Accomplishments that we're proud of We are all proud that we were able to develop this project in the limited hours we had, and we are also happy that we were able to learn new technologies and APIs to build this project. ## What we learned How to use APIs, how to use MongoDB, and about frameworks such as Bootstrap, and more. ## What's next for Eco-Friendly Travel Planner
## 🌟 Inspiration Studies show that many people know about climate change and sustainable behaviors, but not as many people adjust their actions accordingly. This is largely due to inconvenience and the perception of no environmental impact. People know that climate change is bad, but they don’t know where to start. We want to bridge the gap between knowledge and action, making it easy for people to foster sustainable behaviors in their everyday lives. ## 💡 What it does Our website, Carbon Cut, assesses a person’s carbon footprint and offers personalized recommendations on how to make the most impact with the least effort. We want to emphasize that **individuals are NOT the problem, but they CAN be part of the solution**. By taking the effort of research out of their hands and quantifying their impact, Carbon Cut addresses users’ two most prevalent pain points. We have also established the groundwork to implement credits, rewarding users for their sustainable choices. They will be able to use these credits for discounts on partner brands, such as (ideally!) Cotopaxi, Patagonia, Reformation, and other sustainable brands. Our environmental features are: * **Item checker:** You can upload a photo of the tag that contains the details of any product (mostly clothes), and it will assign a grade to your item based on its carbon impact. * **Sustainable Restaurants**: You can input any location you want, and it will display a map of the most sustainable restaurants near you, with their names and locations. * **Transportation Tracker**: Just select the origin and destination of your trip, and it will show you the most sustainable way to get there. Furthermore, in order to create more consciousness about the environment, we have a chatbot in which you can "talk" to some of the priority places of the planet, such as the Amazon Rainforest, the Northern Great Plains, and the Coral Triangle, among others. You can ask these places whatever you want, as if they were people. ## 🛠️ How we built it In the creation of Carbon Cut, we began with research and brainstorming. Literature showed that high prices, perception of no environmental impact, greenwashing, inconvenience, and social image were among the biggest barriers to sustainable action. Thus, we established features to address many of these challenges and sought out feedback from other hackers (Doc link posted with all feedback and iterations). The resounding consensus of the 8 hackers we surveyed was that inconvenience and perception of no environmental impact were the two most prevalent barriers to action. With that reassurance in mind, we tailored our website to emphasize impact and optimize convenience. From a technical perspective, the front end was made using HTML, CSS, and Next.js. The backend was made by creating an API using FastAPI. * The Item Checker was made using Computer Vision (Optical Character Recognition) and a custom few-shot LLM built on GPT-3 * The Restaurants feature was made using the Google Places API * The Routes feature was made using the Google Routes API * The Planet LLM is a Retrieval Augmented Generation model trained with data from WWF, using a vector database from Chroma with dozens of documents ## 🚧 Challenges we ran into All's well that ends well. With that being said, no hackathon is complete without its challenges. We had 2 main challenges.
Firstly, we had some issues when deploying our Chroma vector database, due to dependency conflicts and the database being too heavy to deploy on Render, so we used Google App Engine for the RAG model. Secondly, we also had some issues when deploying the image-to-text model: we used Pytesseract, which needs the Tesseract binary installed on the machine, and that couldn't be done in the deployed instance, so we had to change the way we identified the text in an image. ## 🏆 Accomplishments that we're proud of We are proud of our team. Coming from universities in Pennsylvania, Florida, and Mexico, we formed our team on Slack and met at the TreeHacks orientation. From that moment on, we maintained open communication, kept a strong work ethic, and shared many laughs. We are also proud of the progress we have made on Carbon Cut in 36 hours and the potential it holds to shape the lives of many people. One hacker, who preferred to remain anonymous, said of Carbon Cut, "When you see carbon footprint used in marketing it's always very condemning. It’s nice to see something focused on action." Another noted, in our first round of user pain point research, that inconvenience was one of their major barriers and that they would use a website such as Carbon Cut, "especially if it showed me how much impact it would have." Thanks to our multi-disciplinary team and the thoughtful responses of our fellow hackers, Carbon Cut now features a dashboard that quantifies impact in a variety of ways, from percent change to trees planted to Olympic-sized swimming pools of water saved. Through the use of social math, we are proud that Carbon Cut can be used, understood, and enjoyed by a wide range of users. ## 📚 What we learned This project was an exercise in all of our skills, including our ability to learn! We tackled Carbon Cut with an army of old and new skills, and even some that we learned at the various TreeHacks workshops. From Next.js to RAG models to generative UI (shoutout to Guillermo Rauch from Vercel!), this weekend was nothing short of an exercise in experiential learning. ## 🔮 What's next for Carbon Cut Next up for Carbon Cut is increased precision carbon tracking, partnerships, and credits. We hope to continually improve our carbon emissions tracking and keep personalized recommendations modern and up-to-date with current technologies. Additionally, we plan to establish credits to reward users for choosing sustainable alternatives. These credits will be redeemable for discounts towards partnered sustainable brands, such as Cotopaxi and Patagonia. Lastly, as always, Carbon Cut is here to help *you*. If you have any requests or ideas on how to improve Carbon Cut, please don't hesitate to reach out!
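Carbon Cut's Planet chatbot is described above as retrieval-augmented generation over WWF documents stored in a Chroma vector database. A minimal sketch of that retrieval step in Python follows; the collection name, sample documents, and prompt format are illustrative assumptions, not the project's actual code.

```python
import chromadb

# In-memory Chroma client; the real deployment persists and hosts this (e.g. on App Engine).
client = chromadb.Client()
collection = client.create_collection("wwf_priority_places")  # assumed collection name

# A couple of stand-in documents; the project loads dozens of WWF documents.
collection.add(
    ids=["amazon-1", "coral-triangle-1"],
    documents=[
        "The Amazon Rainforest stores vast amounts of carbon and is threatened by deforestation.",
        "The Coral Triangle holds the highest diversity of corals and is vulnerable to warming seas.",
    ],
    metadatas=[{"place": "Amazon Rainforest"}, {"place": "Coral Triangle"}],
)

def build_prompt(question: str, place: str) -> str:
    """Retrieve the most relevant passages and assemble a grounded prompt for the LLM."""
    results = collection.query(query_texts=[question], n_results=2)
    context = "\n".join(results["documents"][0])
    return (
        f"You are the {place}, speaking in the first person.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer as the place itself:"
    )

print(build_prompt("How does deforestation affect you?", "Amazon Rainforest"))
```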
## Inspiration Oftentimes, we travel to destinations based on their popularity, like New York or Hawaii, rather than looking for places that we would really enjoy the most. So we decided to create a web application that would recommend travel destinations to users based on images they upload that reflect their own interests. Each suggestion is tailored to every query. ## What it does This website will generate a list of suggestions based on a user-inputted image, provided via URL or upload. Using Microsoft Azure Computer Vision, the website will generate a list of suggested travel destinations, including the lowest roundtrip airfare and predicted non-flight-related vacation costs, by analyzing the tags returned by Computer Vision. ## How we built it We have three main code files: the database of locations, the HTML file for the website, and the JavaScript file that defines the methods and implements the API. The HTML page receives user input and returns the suggestions. The database contains formatted data about various possible travel destinations. The JavaScript file gathers information from each inputted photo and assigns the photo scores in various categories. Those scores are then compared to those in the database, and using factors such as cost and location type, we generate a list of 7 travel recommendations. ## Challenges we ran into While trying to acquire a suitable database, we attempted to scrape Wikipedia articles using Node.js. After using cheerio and building the basic structure, we found that the necessary information was located in different parts of each article. We were ultimately unable to write code that could filter through each article and extract information, and had to hand-generate a database after 5 or more hours of work. Another challenge was finding an efficient comparison method so our website did not take too long to process photos. ## Accomplishments that we're proud of Our website accomplished our main goal. It is able to take in an image, process the tags, and return a list of possible destinations. We also have a basic algorithm that maximizes the accuracy of our website given the current database format. ## What we learned Since our team had first-time hackers, who were also first-time JavaScript coders, we learned a great deal about how the language functions and about standards for coding websites. We also learned how to share and update code through GitHub. More specifically to our project, this was the first time we used the Microsoft Azure and Amadeus APIs, and we learned how to implement them into our code. ## What's next for Travel Match We want to build a more comprehensive database that includes more locations with more specific data. We also want to come up with a more efficient and accurate comparison algorithm than we currently have.
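Travel Match's matching step (score the uploaded photo's vision tags per category, compare against each destination's stored scores, and rank by fit and cost) is implemented in JavaScript; the Python sketch below only illustrates the scoring idea, and the category names, weights, and destination data are assumptions.

```python
# Illustrative only: category names, keyword sets, and destination data are assumptions,
# not Travel Match's actual database format.
DESTINATIONS = [
    {"name": "Banff",    "scores": {"mountain": 9, "beach": 1, "city": 2},  "cost": 1400},
    {"name": "Honolulu", "scores": {"mountain": 3, "beach": 9, "city": 5},  "cost": 1900},
    {"name": "New York", "scores": {"mountain": 0, "beach": 2, "city": 10}, "cost": 1600},
]

def score_photo(tags: list[str]) -> dict[str, int]:
    """Turn vision-API tags into per-category scores by counting related keywords."""
    keywords = {
        "mountain": {"mountain", "hill", "snow", "hiking"},
        "beach": {"beach", "ocean", "sand", "surf"},
        "city": {"skyline", "building", "street", "nightlife"},
    }
    return {cat: sum(tag in words for tag in tags) for cat, words in keywords.items()}

def recommend(tags: list[str], budget: float, top_n: int = 7) -> list[str]:
    """Rank destinations by similarity of category scores, lightly penalizing over-budget trips."""
    photo = score_photo(tags)

    def fit(dest):
        similarity = sum(photo[c] * dest["scores"][c] for c in photo)
        penalty = max(0, dest["cost"] - budget) / 100
        return similarity - penalty

    ranked = sorted(DESTINATIONS, key=fit, reverse=True)
    return [d["name"] for d in ranked[:top_n]]

print(recommend(["beach", "ocean", "surf"], budget=1500))
```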
partial
## Inspiration Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized braille menus. For millions of visually impaired people around the world, those are not hypotheticals; they are facts of life. Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life. ## What it does Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text. ## How we built it The front-end is a native iOS app written in Swift and Objective-C with Xcode. We use Apple's native vision and speech APIs to give the user intuitive control over the app. --- The back-end service is written in Go and is served with NGrok. --- We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud. --- We make use of the Google Vision API in three ways: * To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc. * To run Optical Character Recognition on text in the real world which is then read aloud to the user. * For label detection, to identify objects and surroundings in the real world which the user can then query about. ## Challenges we ran into There were a plethora of challenges we experienced over the course of the hackathon. 1. Each member of the team wrote their portion of the back-end service in a language they were comfortable in. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go. 2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded. 3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Regenerating API keys proved to be of no avail, and ultimately we overcame this by rewriting the service in Go. ## Accomplishments that we're proud of Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put-together app. Facebook does not have an official API for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app. ## What we learned Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack. Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis. Zak learned about building a native iOS app that communicates with data-rich APIs. We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service. Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all, we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges. ## What's next for Sight If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app. Ultimately, we plan to host the back-end on Google App Engine.
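Sight's back end is written in Go, but the three Google Vision calls described above (label detection, OCR, and face sentiment) look roughly like this in Python's client library; the image path and the amount of output shown are just for illustration.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # requires GOOGLE_APPLICATION_CREDENTIALS to be set

def describe_scene(image_path: str) -> None:
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Label detection: what objects/surroundings are in view.
    labels = client.label_detection(image=image).label_annotations
    print("Surroundings:", [label.description for label in labels[:5]])

    # OCR: any written text that could be read aloud to the user.
    texts = client.text_detection(image=image).text_annotations
    if texts:
        print("Text found:", texts[0].description.strip())

    # Face detection: rough sentiment via joy/sorrow/surprise likelihoods.
    faces = client.face_detection(image=image).face_annotations
    for face in faces:
        print("Face joy likelihood:", face.joy_likelihood)

describe_scene("frame.jpg")
```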
## Internet of Things 4 Diabetic Patient Care ## The Story Behind Our Device One team member heard from his foot doctor the story of a diabetic patient who almost lost his foot due to an untreated foot infection after stepping on a foreign object. Another team member came across a competitive shooter who had his lower leg amputated after an untreated foot ulcer resulted in gangrene. A symptom in diabetic patients is diabetic neuropathy, which results in loss of sensation in the extremities. This means a cut or a blister on a foot often goes unnoticed and untreated. Occasionally, these small cuts or blisters don't heal properly due to poor blood circulation, which exacerbates the problem and leads to further complications. These further complications can result in serious infection and possibly amputation. We decided to make a device that helps combat this problem. We invented IoT4DPC, a device that detects abnormal muscle activity caused either by stepping on potentially dangerous objects or by inflammation due to swelling. ## The technology behind it A muscle sensor attaches to the Nucleo-L496ZG board, which feeds data to an Azure IoT Hub. The IoT Hub, through Twilio, can notify the patient (or a physician, depending on the situation) via SMS that a problem has occurred and the patient needs to get their feet checked or come in to see the doctor. ## Challenges While the team was successful in prototyping data acquisition with an Arduino, we were unable to build a working prototype with the Nucleo board. We also came across serious hurdles with uploading any sensible data to the Azure IoT Hub. ## What we did accomplish We were able to set up an Azure IoT Hub and connect the Nucleo board to send JSON messages. We were also able to acquire test data in an Excel file via the Arduino.
## Inspiration On the bus ride to another hackathon, one of our teammates was trying to get some sleep, but was having trouble because of how complex and loud the sound of people in the bus was. This led to the idea that in a sufficiently noisy environment, hearing could be just as descriptive and rich as seeing. Therefore, to better enable people with visual impairments to navigate and understand their environment, we created a piece of software that is able to describe and create an auditory map of one's environment. ## What it does In a sentence, it uses machine vision to give individuals a kind of echolocation. More specifically, one simply needs to hold their cell phone up, and the software will work to guide them using a 3D auditory map. The video feed is streamed over to a server where our modified version of the yolo9000 classification convolutional neural network identifies and localizes the objects of interest within the image. It will then return the position and name of each object back to one's phone. It also uses the IBM Watson API to further augment its readings by validating what objects are actually in the scene, and whether or not they have been misclassified. From here, we make it seem as though each object essentially says its own name, so that the individual can essentially create a spatial map of their environment just through audio cues. The sounds get quieter the further away the objects are, and the ratio of sounds between the left and right is also varied as the object moves around the user. The phone also records its orientation and remembers where past objects were for a few seconds afterwards, even if it is no longer seeing them. However, we also thought about where in everyday life you would want extra detail, and one aspect that stood out to us was faces. Generally, people use specific details on an individual's face to recognize them, so using Microsoft's face recognition API, we added a feature that will allow our system to identify and follow friends and family by name. All one has to do is set up their face as a recognizable face, and they are now their own identifiable feature in one's personal system. ## What's next for SoundSight This system could easily be further augmented with voice recognition and processing software that would allow for feedback, making for a much more natural experience. It could also be paired with a simple infrared imaging camera to be used to navigate during the night time, making it universally usable. A final idea for future improvement could be to further enhance the machine vision of the system, thereby maximizing its overall effectiveness.
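The core mapping SoundSight describes, where each detected object "says its own name" more quietly with distance and panned left or right by its position relative to the user, can be sketched as a small volume/pan calculation. The field-of-view and falloff constants below are assumptions for illustration, not values from the project.

```python
import math

FOV_DEGREES = 60.0         # assumed horizontal field of view of the phone camera
MAX_AUDIBLE_DISTANCE = 10  # metres beyond which an object is effectively silent (assumption)

def object_audio_params(bbox_center_x: float, frame_width: float, distance_m: float):
    """Convert a detection's horizontal position and distance into (volume, stereo_pan).

    bbox_center_x: x-coordinate of the detection's bounding-box centre in pixels.
    frame_width:   width of the camera frame in pixels.
    distance_m:    estimated distance to the object in metres.
    Returns volume in [0, 1] and pan in [-1 (left), +1 (right)].
    """
    # Horizontal offset from the frame centre, mapped to an angle within the FOV.
    offset = (bbox_center_x - frame_width / 2) / (frame_width / 2)
    angle = offset * (FOV_DEGREES / 2)
    pan = math.sin(math.radians(angle)) / math.sin(math.radians(FOV_DEGREES / 2))

    # Quadratic falloff so nearer objects are louder.
    volume = max(0.0, 1.0 - (distance_m / MAX_AUDIBLE_DISTANCE) ** 2)
    return round(volume, 2), round(pan, 2)

# Example: a chair detected slightly right of centre, 3 m away.
print(object_audio_params(bbox_center_x=720, frame_width=1080, distance_m=3.0))
```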
winning
## Inspired by Walt Disney *"Disneyland will never be completed. It will continue to grow as long as there is imagination left in the world." -Walt Disney* Words from the man who inspired millions to dream bigger resonated with us as we undertook developing Disney Augmented. As huge Disney fans, we wanted to add another level of interactivity to Disney theme parks and entertain park visitors. We were disappointed by the hostile user design of the printed maps and the lack of interaction between visitors and their environments in less staffed regions of the park. To solve these issues, we believe Augmented Reality (AR) can help immerse users in their environments at Disney and create new opportunities for interactions. This will create a more entertaining and memorable user experience for all Disney Park visitors. ## What is it? Our Disney Augmented app adds another level to the Disney experience. Navigating parks can always be a challenge to families, and paper and mobile maps can often be complicated and annoying. The Disney Augmented app provides real-time Augmented Reality navigation, allowing users to point their phone in a direction and get pinpoint locations, distances, and wait times for sites of interest (in AR). Moreover, users can get additional information from the environment, like seeing hovering AR displays from scanning ride signs and seeing additional live AR graphics of relevant Disney characters in the environments they are in. [Click here to preview the real-time Augmented Reality navigation used in Stanford's Campus](https://youtu.be/XAflkYuwkHQ) Second, our Disney Augmented app adds additional ways for users to engage with the environment in the form of mini-games. Users can use the app to interact with the park in novel ways, such as embarking on a scavenger hunt for Hidden Mickeys, using the app to track their progress and get hints. [Click here to preview the Hidden Mickey Scavenger Hunt Game](https://imgur.com/a/AlUzhIr) ## Developing the App We set up this project using Apple's ARKit with the goal in mind to create an Augmented Reality application for users to use while in a Disney park. This project was composed of two verticals: 1) A focus on a more user-friendly AR map to rides with information on rollercoasters 2) A focus on designing an interactive game for tourists as they walked through the park or were idly standing in line We iteratively developed a virtual world around any user and, using data from their phone's kinetic sensors and live GPS data, we create a 3D coordinate system around them highlighting nearby landmarks and suspending information about each landmark in the real world. Further, we created an AR Resource Group database of many pictures of Hidden Mickeys and created a detection algorithm that highlights when users have found a Hidden Mickey; we then create a plane in front of the Hidden Mickey and provide a special video and congratulations to the user. ## Challenges In creating the AR Map software, we found it especially challenging to create persisting and localized labels that accurately showed users key landmarks around them with respect to the user's location. We made use of existing SLAM techniques and linear algebra abstractions to create accurate, resizeable, scaled mappings of the user's surroundings using real-time data from the phone's sensor kinetics (such as the gyroscope and camera) and Google Maps latitude and longitude data.
In creating the Hidden Mickey Mouse Scavenger Hunt Game, we ran into hurdles with developing an algorithm to detect Hidden Mickeys. Ultimately we ended up creating a database of existing images of Hidden Mickeys and used computer vision techniques to find whether the camera feed detected the presence of a Hidden Mickey. ## Accomplishments that we're proud of We are proud of finishing this hackathon and shipping a prototype. This was our first hackathon and we learned so much with developing with ARKit. We came in on Friday with no knowledge of ARKit and spent all weekend learning because we were so inspired to create an innovative AR experience for kids to enjoy at Disney. There were more than a few times during the weekend when we thought we couldn’t get a demo working, but we ended up working through it all. ## What we learned Through undertaking this project we learned more of the iOS development process and specifically expressed mastery of augmented reality design and existing Apple packages such as ARKit, Vision, CoreML, and CoreLocation. We learned more about the potential of AR in creating better applications rooted in Human-Computer Interaction. Finally, we learned how to interface our backgrounds in statistics, mathematics, and machine learning into new endeavors in computer vision such as with the problem of simultaneous localization and mapping. ## What's next for Disney Augmented We hope we can continue to work on developing this AR experience and integrate it into Disney’s official mobile app for everyone to use. It’d be our dream come true to inspire others to dream bigger and to experience an immersive experience through this AR app. Our label can easily be integrated with the current Disney World app to include information such as wait time, height requirements, and accessibility services to accommodate all parties. In the future, we hope to create additional landmarks for users to take advantage of such as bathrooms, dining areas, shops, and other 'hidden secrets'. At the moment, what's next for Disney Augmented is bringing signs to life - creating stronger visual and infographics that users can read on the go to make their Disney experience better.
## Inspiration * Inspired by issues pertaining to present-day social media that focuses more on likes and views as opposed to photo-sharing * We wanted to connect people on the internet and within communities in a positive, immersive experience * Bring society closer together rather than push each other away ## What it does * Social network for people to share images and videos to be viewed in AR * Removed parameters such as likes, views, and engagement to focus primarily on media-sharing ## How we built it * Used Google Cloud Platform as our VM host for our backend * Utilized web development tools for our website * Git to collaborate with teammates * Unity and Vuforia to develop AR ## Challenges we ran into * Learning new software tools, but we all persevered and had each other's back. * Using Unity and learning how to use Vuforia in real time ## Accomplishments that we're proud of * Learning Git, and a bunch more new software that we had never touched! * Improving our problem-solving and troubleshooting skills * Learning to communicate with teammates * Basics of AR ## What we learned * Web development using HTML, CSS, JavaScript and Bootstrap ## What's next for ARConnect * Finish developing: * RESTful API * DBM * Improve UX by: * Mobile app * Adding depth to user-added images (3D) in AR * User accessibility
## Inspiration As international students, we often have to navigate around a lot of roadblocks when it comes to receiving money from back home for our tuition. Cross-border payments are gaining momentum with so many emerging markets. In 2021, the top five recipient countries for remittance inflows in current USD were India (89 billion), Mexico (54 billion), China (53 billion), the Philippines (37 billion), and Egypt (32 billion). The United States was the largest source country for remittances in 2020, followed by the United Arab Emirates, Saudi Arabia, and Switzerland. However, cross-border payments face 5 main challenges: cost, security, time, liquidity & transparency. * Cost: Cross-border payments are typically expensive due to costs such as currency exchange fees, intermediary charges, and regulatory costs. * Time: Most international payments take anything between 2-5 days. * Security: The rate of fraud in cross-border payments is comparatively higher than in domestic payments because a payment is much more difficult to track once it crosses the border. * Standardization: Different countries tend to follow a different set of rules & formats, which makes cross-border payments even more difficult & complicated at times. * Liquidity: Most cross-border payments work on the pre-funding of accounts to settle payments; hence it becomes important to ensure adequate liquidity in correspondent bank accounts to meet payment obligations within cut-off deadlines. ## What it does CashFlow is a solution to all of the problems above. It provides a secure method to transfer money overseas. It uses the Checkbook.io API to verify users' bank information and check for liquidity, and with features such as KYC, it ensures security in enabling instant payments. Further, it uses another API to convert the currencies using accurate, non-inflated rates. Sending money: Our system requests a few pieces of information from you, which pertain to the recipient. After having added your bank details to your profile, you will be able to send money through the platform. The recipient will receive an email message, through which they can deposit into their account in multiple ways. Requesting money: By requesting money from a sender, an invoice is generated for them. They can choose to send money back through multiple methods, which include credit and debit card payments. ## How we built it We built it using HTML, CSS, and JavaScript. We also used the Checkbook.io API and an exchange rate API. ## Challenges we ran into Neither of us is familiar with backend technologies or React. Mihir has never worked with JS before and I haven't worked on many web dev projects in the last 2 years, so we had to engage in a lot of learning and refreshing of knowledge as we built the project, which took a lot of time. ## Accomplishments that we're proud of We learned a lot and built the whole web app as we were continuously learning. Mihir learned JavaScript from scratch and coded in it for the whole project, all in under 36 hours. ## What we learned We learned how to integrate APIs in building web apps, JavaScript, and a lot of web dev. ## What's next for CashFlow We had a couple of bugs that we couldn't fix; we plan to work on those in the near future.
losing
## Inspiration: The inspiration for this project was finding a way to incentivize healthy activity. While the watch shows people data like steps taken and calories burned, that alone doesn't encourage many people to exercise. By making the app, we hope to make exercise into a game that people look forward to doing rather than something they dread. ## What it does Zepptchi is an app that allows the user to have their own virtual pet that they can take care of, similar to a Tamagotchi. The watch tracks the steps that the user takes and rewards them with points depending on how much they walk. With these points, the user can buy food to nourish their pet, which incentivizes exercise. Beyond this, they can earn points to customize the appearance of their pet, which further promotes healthy habits. ## How we built it To build this project, we started by setting up the environment on the Huami OS simulator on a MacBook. This allowed us to test the code on a virtual watch before implementing it on a physical one. We used Visual Studio Code to write all of our code. ## Challenges we ran into One of the main challenges we faced with this project was setting up the environment to test the watch's capabilities. Out of the 4 of us, only one could successfully install it. This was a huge setback for us since we could only write code on one device. This was worsened by the fact that the internet was unreliable, so we couldn't collaborate through other means. One other challenge was ## Accomplishments that we're proud of Our group was most proud of solving the issue where we couldn't get an image to display on the watch. We had been trying for a couple of hours to no avail, but we finally found out that it was due to the size of the image. We are proud of this because fixing it showed that our work hadn't been for naught and we got to see our creation working right in front of us on a mobile device. On top of this, this is the first hackathon any of us ever attended, so we are extremely proud of coming together and creating something potentially life-changing in such a short time. ## What we learned One thing we learned is how to collaborate on projects with other people, especially when we couldn't all code simultaneously. We learned how to communicate with the one who *was* coding by asking questions and making observations to get to the right solution. This was much different than we were used to since school assignments typically only have one person writing code for the entire project. We also became fairly well-acquainted with JavaScript as none of us knew how to use it (at least not that well) coming into the hackathon. ## What's next for Zepptchi The next step for Zepptchi is to include a variety of animals/creatures for the user to have as pets, along with any customization that might go with it. This is crucial for the longevity of the game since people may no longer feel incentivized to exercise once they obtain the complete collection. Additionally, we can include challenges (such as burning x calories in 3 days) that give specific rewards to the user, which can stave off the repetitive nature of walking steps, buying items, walking steps, buying items, and so on. With this app, we aim to gamify a person's well-being so that their future can be one of happiness and health.
## Inspiration In today's fast-paced world, highly driven individuals often overwork themselves without regard for how it impacts their health, only experiencing the consequences *when it is too late*. **AtlasAI** aims to bring attention to these health issues at an early stage, such that our users are empowered to live their best lives in a way that does not negatively impact their health. ## What it does We realized that there exists a gap between today's abundance of wearable health data and meaningful, individualized solutions which users can implement. For example, many smart watches today are saturated with metrics such as *sleep scores* and *heart rate variability*, many of which actually mean nothing to their users in practice. Therefore, **AtlasAI** aims to bridge this gap to finally **empower** our users to use this health data to enhance the quality of their lives. Using our users' individual health data, **AtlasAI** is able to: * suggest event rescheduling * provide *targeted*, *actionable* feedback * recommend Spotify playlists depending on user mood ## How we built it Our frontend was built with `NextJS`, with styling from `Tailwind` and `MaterialUI`. Our backend was built with `Convex`, which integrates technologies from `TerraAPI`, `TogetherAI` and `SpotifyAPI`. We used a two-phase approach to fine-tune our model. First, we utilized TogetherAI's base models to generate test data (a list of rescheduled JSON event objects for the day). Then, we picked logically sound examples to fine-tune our model. ## Challenges we ran into In the beginning, our progress was extremely slow as **AtlasAI** integrates so many new technologies. We only had prior experience with `NextJS`, `Tailwind` and `MaterialUI`, which essentially meant that we had to learn how to create our entire backend from scratch. **AtlasAI** also went through many integrations throughout this weekend as we strove to provide the best recommendations for our users. This involved long hours spent in fine-tuning our `TogetherAI` models and testing out features until we were satisfied with our product. ## Accomplishments that we're proud of We are extremely proud that we managed to integrate so many new technologies into **AtlasAI** over the course of three short days. ## What we learned In the development realm, we successfully mastered the integration of several valuable third-party applications such as Convex and TogetherAI. This expertise significantly accelerated our ability to construct lightweight prototypes that accurately embody our vision. Furthermore, we honed our collaborative skills through engaging in sprint cycles and employing agile methodologies, which collectively enhanced our efficiency and expedited our workflow. ## What's next for AtlasAI Research indicates that health data can reveal critical insights into health symptoms like depression and anxiety. Our goal is to delve deeper into leveraging this data to furnish enhanced health insights as proactive measures against potential health ailments. Additionally, we aim to refine lifestyle recommendations for the user's calendar to foster better recuperation.
GitHub Repo: <https://github.com/jas6zhang/Unravel> ## Inspiration ☁️ As we move on from the pandemic and return to workplaces and schools, it is getting easier to get caught up in the distractions of daily life. Being students ourselves, we have experienced the pressure of balancing multiple things at the same time. Hence, we realize the importance of taking time to reflect on our experiences to gain deeper insights into, and keep in touch with, our emotional well-being. That's what inspired us to create Unravel—a functional application that serves as a companion in your daily journaling. Like a string that slowly unravels and becomes free of its burdens, we hope to achieve that same feeling with our app in healing the minds and hearts of people. ## What it does 💁‍♂️ Unravel is a modern take on daily journaling. Users are able to write a short journal entry every day with guidance provided by randomly generated prompts. Then, our trained NLP model will perform sentiment analysis and keyword extraction to identify the mood and key topics of the entry that day. Through fun & unique historical data visualization, users can look back at their journals and embrace the highs and lows of their journey thus far. The scope is limitless. Unravel could be used to identify core themes in your life and your sentiment behind them (even better than you may have believed), or even help you unravel that thought you were stuck on! We hope this platform makes journaling just a little bit easier, and that it aids in unravelling your life ahead. ## How we built it 🔨 The design + conceptualization occurred on Figma and the front end of the application was built using React, Tailwind & ChakraUI. Our NLP model was implemented using Cohere’s NLP toolkit and trained using multiple open source datasets and extensive preprocessing. After a journal entry is inputted, the model is able to accurately analyze the sentiment and extract keywords based on the text. Information such as the journal entry, sentiment values, and an associated timestamp is then stored in a Google Cloud Firestore database. The overall backend was created using the Flask framework, which ties the application together with additional features such as user authentication. ## Accomplishments that we're proud of 💪 What started as a team of individuals who were still developing strange ideas hours into the hackathon turned out to produce results beyond our initial expectations. None of us had any experience with NLP technology, so there was a steep learning curve in our implementation. As a group, we are incredibly proud to have finished our project, and one that we feel good about. We collaborated extremely well and, ultimately, produced a project that we believe is both functional and inclusive. ## What we learned 🧠 Many of our group members were not familiar with technologies such as Firebase and ChakraUI, and were exposed to the power of Large Language Models. This hackathon was a perfect way to dive deep into all of these technologies. ## What's Next 💼 Unravel has the potential to disrupt the way journaling is done. There are many applications in the market where users can write journal entries comfortably, but none of them incentivize the user to go back and reflect on their emotional well-being. Reflecting upon how you reflect is an undervalued skill. By continuing to use the power of Cohere’s API, we aim to improve upon context-based semantic analysis as well as true natural language processing.
Instead of classification to a finite set of labels, we want to provide our customers with deep analysis into their thought patterns. Additional features were actually conceptualized as well, but due to time and scope constraints, they were not implemented. One such example would be a support chatbot to converse with the user, which would be recommended when a trend of negative emotions is indicated. With our project, we want everyone to unravel.
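Unravel's backend flow (receive a journal entry through Flask, score its sentiment and keywords with Cohere's NLP toolkit, and store the entry, sentiment values, and timestamp in Firestore) could look roughly like the sketch below. The route name and the `score_sentiment` stub are stand-ins; the actual Cohere calls and trained examples are the project's own.

```python
from datetime import datetime, timezone

from flask import Flask, jsonify, request
from google.cloud import firestore

app = Flask(__name__)
db = firestore.Client()  # requires Google Cloud credentials

def score_sentiment(text: str) -> dict:
    """Placeholder for the Cohere-powered sentiment + keyword step.

    In the real project this is where Cohere's classification/keyword models
    are called; here we return a dummy result so the sketch stays self-contained.
    """
    return {"sentiment": "neutral", "keywords": []}

@app.post("/entries")  # hypothetical route name
def add_entry():
    entry = request.get_json()["text"]
    analysis = score_sentiment(entry)
    doc = {
        "text": entry,
        "sentiment": analysis["sentiment"],
        "keywords": analysis["keywords"],
        "timestamp": datetime.now(timezone.utc),
    }
    db.collection("journal_entries").add(doc)
    return jsonify(doc["sentiment"]), 201

if __name__ == "__main__":
    app.run(debug=True)
```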
partial
## Inspiration As undergraduate students, we've all endured the pain of having to download 5+ different messaging apps just to get involved within our campus communities. Some clubs/classes/labs use Discord, some use GroupMe, some use Messenger, and the list goes on. This can get cumbersome, and sometimes just downright annoying. Our vision is to create a platform that encourages community engagement through one single and easy to use app! This platform is not only useful for students, but also for various other discourse communities in general. Here are just a few examples: activist groups, software engineers, outdoors-persons, concert-goers, gym-goers, astrophiles, python-lovers, and so many more. Being connected within your community(ies) has never been easier! ## What it does What CONNECTED does is allow users to voluntarily choose what communities they are involved with, and then be able to interact within their community with ease. Joining one app, users are able to search for others within their community(ies), directly or group message them, post discussions and/or reply to other discussion posts, and join groups! ## How we built it We decided to go the web-app route: Python (Flask) for the API (backend; application programming interface), Google Firebase for cloud data storage, and JavaScript (React.js, Next.js) for the UI/UX (frontend; user interface & experience). We followed a common software engineering technique for project management: assigning and completing tickets via GitHub. ## Challenges we ran into Our main challenge was that none of us have any frontend experience whatsoever. Due to this, I, Justin Ventura, took one for the team and spent a lot of time learning React in order to provide our app with some sort of UI. On top of this, we are new to hackathons, so there was a bit of a learning curve for us. ## Accomplishments that we're proud of Justin Ventura: I am proud of the fact I was able to learn React, and create a half-decent looking UI! While I definitely prefer backend work, I'm happy that I was able to learn React on the spot. Michael Dacanay: I am proud of the work that the team did to grind it out. I learned some cool things about the Flask dev process and Google Firebase's document/collection model for its database. ## What we learned Justin Ventura: I learned a whole lot about React, and front-end development as a whole. I also learned that drawing mocks on an iPad really helps as well! Michael Dacanay: I really enjoyed working with the team and made plenty of memories! On the tech side, I learned a bit about both the frontend and backend frameworks we used, and the debugging needed to resolve issues. Bokai Bi: I worked with both Flask and Google Firebase for the first time in this project, and learning to use them was super interesting. Being in charge of designing the database and API interactions also gave me insight into how large systems are created and managed. This was my first hackathon, but I had lots of fun and met a lot of cool new people! ## What's next for CONNECTED Currently, some of CONNECTED's key capabilities are not fully implemented due to hackathon time constraints and our lack of UI/UX experience. So finishing up the frontend would be a good first step.
After that, there could be so many cool and useful features to be added with more time, such as: a recommendation system for finding groups, profile pages for users to express themselves, a mechanism to set up events, and perhaps even a platform for users passionate about their community to share ideas/solutions/polls. Also, a solution to a problem you may have thought about while reading: how to keep toxic/hateful users and trolls from hurting communities. This is a tough problem in general for any sort of social media platform, but with Natural Language Processing and Sentiment Analysis systems, this problem could be kept at bay.
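Given the stack described above (a Flask API over Firebase's document/collection model), the community data could be organized as nested collections along these lines. The collection and field names here are assumptions for illustration, not CONNECTED's actual schema.

```python
from google.cloud import firestore

db = firestore.Client()  # requires Google Cloud credentials

def join_community(user_id: str, community_id: str) -> None:
    """Record membership both on the user and under the community (assumed schema)."""
    db.collection("users").document(user_id).set(
        {"communities": firestore.ArrayUnion([community_id])}, merge=True
    )
    db.collection("communities").document(community_id).collection("members").document(user_id).set(
        {"joined_at": firestore.SERVER_TIMESTAMP}
    )

def post_discussion(community_id: str, user_id: str, title: str, body: str) -> None:
    """Add a discussion post to a community's nested 'posts' collection."""
    db.collection("communities").document(community_id).collection("posts").add(
        {"author": user_id, "title": title, "body": body, "created": firestore.SERVER_TIMESTAMP}
    )

join_community("justin", "umd-hackers")
post_discussion("umd-hackers", "justin", "Team for next hackathon?", "Looking for 2 more people.")
```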
## Inspiration I wanted to learn to create a website from scratch solely using Figma and plugins. ## What it does My website displays cute emojis alongside motivating text that says "I'm Fantas-tech" to alleviate effects of imposter syndrome in the tech industry. ## How we built it This simple website was designed on Figma. After the designs were created, I used the Locofy plugin to help me code and deploy my site on to <http://imfantas.tech> ## Challenges we ran into The Locofy plugin came with so many features it was overwhelming to utilize. Additionally, there were some issues around responsive renderings that were difficult to debug. ## Accomplishments that we're proud of I designed and deployed the website! ## What we learned Throughout the building process, I learned to use auto-layout and dev-mode on Figma! ## What's next for I'm Fantas-tech (<http://imfantas.tech>) I'd like to implement additional motivational text in the next iteration and enhance the responsiveness of the website!
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application that is aimed towards connecting members of our society together in times of urgent medical need. Users can sign up as respondents, which will allow them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through the use of Firebase Auth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users could take a picture of their ID and their information can be extracted. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before.
losing
## Inspiration We built LOLA to let anyone learn accurate, interesting information from Youtube in a more accessible way. We've built a few subject-specific tutors before for classes like AP World History and skills for financial literacy, and our goal in building LOLA was to let students have the same kind of personal 1:1 tutoring experience on any subject they'd like to learn about, from coding to cookie-making. ## What it does LOLA starts off by understanding what its student wants to learn about. It asks questions trying to specify what kind of Youtube playlist it should use, finds some that are appropriate for the student's topic of interest and skill level, and shares those with the student, letting them select/confirm the playlist they want to learn about. Then, LOLA forms a memory of the playlist as a whole and of each video in the playlist. This memory is two-part: it's topic/fact-based, so that LOLA can use it to answer questions answerable by the playlist, and it's playlist- and video-organized, which means LOLA can select the playlists and/or videos that are most useful to helping a student and even intelligently query its own memory, using those video or playlist memories as navigation to answer a question requiring multiple different kinds of information. LOLA uses this memory to answer questions for students in a digestible way, and even play relevant video clips for them, as short-form and on topic as a TikTok. So, overall, LOLA is a new form of learner with structured memory and a more intuitive understanding of the Youtube landscape, and it's also a new form of AI tutor with smarter memory retrievals and a student-oriented and personalized presentation style. It seeks to make the majority of learning topics dramatically more accessible by creating a knowledgeable friend in that area, trained on entire playlists. ## How we built it LOLA is built on two tracks. The first track is for understanding Youtube, according to a student's topic of interest and experience level, and the second is for delivering that information efficiently and digestibly to its students. The first track, comprehension, is built from the following chain of actions: 1. LOLA decides what the student wants to learn about, in conversation with them 2. At some point in the conversation, it outputs, in a specialized and parsable format, the most relevant Youtube query and the behavior of the student notetaker. 3. The chain picks up on the fact that a query has been made, and finds the top Youtube playlists for that query. 4. It shows the playlists to the student, and lets them choose their preference. The second track, tutoring, is built from the following chain: 1. LOLA finds the most relevant video in a playlist for a query, using Weaviate natural language vector similarity to search through video search labels (texts containing descriptions of what's covered in a video). 2. LOLA finds the most relevant topic from that video by vector searching that video's topic chunks. 3. It uses the information it has retrieved to answer student questions with highly accurate, engaging knowledge. ## Challenges we ran into Prompting ChatGPT to build its own knowledge is always super difficult, especially since we set out to build an efficient (non-GPT-4) system that can interpret ANY kind of information. Writing prompts that successfully build topic-based knowledge, the kind a tutor of that area would need to know, on any dataset is very difficult to do.
We also ran into challenges setting up a demo of LOLA, since our backgrounds are more in AI, prompting and chaining than any kind of user-project deployment. This was definitely one of our greater difficulties. ## Accomplishments that we're proud of We're proud that LOLA can teach anyone anything from cookie-making to caching, and we're proud of its new organized memory system, which we think will contribute to the progression of sub-LLM AI systems into the future. We're most proud of the ease LOLA brings to learners. As students, we feel the biggest impact we've made is saving ourselves and other students hours on searching for information that could be intelligently synthesized almost instantly. ## What we learned Lots. We learned most on the language side of the project. We learned how Youtube queries represent certain meanings and most of all we learned a ton about developing frontends. :P ## What's next for Lola (Learn online, like actually) Next for LOLA, we're going to try to improve efficiency with an LLM-specific cache, and make complex memory retrieval actually work in our project. We'll be releasing it to the public in the coming weeks, once we solve efficiency concerns and add efficient voice.
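The retrieval chain in LOLA's tutoring track (find the most relevant video by vector-searching its search labels, then query that video's topic chunks) maps naturally onto Weaviate's near-text queries. Below is a rough sketch using the v3-style Python client; the class names and properties are assumptions, not LOLA's actual schema.

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate with a text2vec module

def find_best_video(question: str) -> dict:
    """Step 1: pick the most relevant video in the playlist by its search label."""
    result = (
        client.query.get("VideoLabel", ["video_id", "summary"])  # assumed class/properties
        .with_near_text({"concepts": [question]})
        .with_limit(1)
        .do()
    )
    return result["data"]["Get"]["VideoLabel"][0]

def find_topic_chunks(question: str, k: int = 3) -> list:
    """Step 2: pull the top-k topic chunks to ground the answer.

    In practice the query would also be restricted to the chosen video
    (e.g. with a where filter on video_id), which depends on the schema.
    """
    result = (
        client.query.get("TopicChunk", ["text"])  # assumed class/properties
        .with_near_text({"concepts": [question]})
        .with_limit(k)
        .do()
    )
    return [chunk["text"] for chunk in result["data"]["Get"]["TopicChunk"]]

video = find_best_video("How does quicksort choose a pivot?")
context = find_topic_chunks("How does quicksort choose a pivot?")
```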
## Inspiration We were inspired by the **protégé effect**, a psychological phenomenon where teaching others helps reinforce the student's own understanding. This concept motivated us to create a platform where users can actively learn by teaching an AI model, helping them deepen their comprehension through explanation and reflection. We wanted to develop a tool that not only allows users to absorb information but also empowers them to explain and teach back, simulating a learning loop that enhances retention and understanding. ## What it does Protégé enables users to: * **Create lessons** on any subject, either from their own study notes or with AI-generated information. * **Teach** the AI by explaining concepts aloud, using real-time speech-to-text conversion. * The AI then **evaluates** the user’s explanation, identifies errors or areas for improvement, and provides constructive feedback. This helps users better understand the material while reinforcing their knowledge through active participation. * The system adapts to user performance, offering **customized feedback** and lesson suggestions based on their strengths and weaknesses. ## How we built it Protégé was built using the **Reflex framework** to manage the front-end and user interface, ensuring a smooth, interactive experience. For the back-end, we integrated **Google Gemini** to generate lessons and evaluate user responses. To handle real-time speech-to-text conversion, we utilized **Deepgram**, a highly accurate speech recognition API, allowing users to speak directly to the AI for their explanations. By connecting these technologies through state management, we ensured seamless communication between the user interface and the AI models. ## Challenges we ran into One of the main challenges was ensuring **seamless integration between the AI model and the front-end** so that lessons and feedback could be delivered in real time. Any lag would have disrupted the user experience, so we optimized the system to handle data flow efficiently. Another challenge was **real-time speech-to-text accuracy**. We needed a solution that could handle diverse speech patterns and accents, which led us to Deepgram for its ability to provide fast and accurate transcriptions even in complex environments. ## Accomplishments that we're proud of We’re particularly proud of successfully creating a platform that allows for **real-time interaction** between users and the AI, providing a smooth and intuitive learning experience. The integration of **Deepgram for speech recognition** significantly enhanced the teaching feature, enabling users to explain concepts verbally and receive immediate feedback. Additionally, our ability to **simulate the protégé effect**—where users reinforce their understanding by teaching—marks a key accomplishment in the design of this tool. ## What we learned Throughout this project, we learned the importance of **real-time system optimization**, particularly when integrating AI models with front-end interfaces. We also gained valuable experience in **balancing accuracy with performance**, ensuring that both lesson generation and speech recognition worked seamlessly without compromising user experience. Additionally, building a system that adapts to users’ teaching performance taught us how crucial **customization and feedback** are in creating effective educational tools. 
## What's next for Protégé Our next steps include: * Developing **personalized lesson plans** that adapt based on user performance in teaching mode, making learning paths more tailored and effective. * Adding **gamified progress tracking**, where users can earn achievements and track their improvement over time, keeping them motivated. * Introducing **community and peer learning** features, allowing users to collaborate and share their teaching experiences with others. * Building a **mobile version** of Protégé to make the platform more accessible for learning on the go.
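Protégé's teach-back loop combines real-time speech-to-text with LLM evaluation. The sketch below shows a batch (pre-recorded) version of the transcription step against Deepgram's REST endpoint plus a stub for the evaluation prompt; the response parsing and the `evaluate_explanation` helper are assumptions for illustration, and the real project streams audio live and evaluates with Google Gemini.

```python
import requests

DEEPGRAM_API_KEY = "YOUR_DEEPGRAM_KEY"

def transcribe(audio_path: str) -> str:
    """Send a recorded explanation to Deepgram's pre-recorded transcription endpoint."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            "https://api.deepgram.com/v1/listen",
            headers={
                "Authorization": f"Token {DEEPGRAM_API_KEY}",
                "Content-Type": "audio/wav",
            },
            data=f.read(),
            timeout=60,
        )
    resp.raise_for_status()
    # Assumed response shape: first channel, first alternative.
    return resp.json()["results"]["channels"][0]["alternatives"][0]["transcript"]

def evaluate_explanation(lesson: str, transcript: str) -> str:
    """Stub for the evaluation step: build a prompt comparing the student's
    explanation against the lesson; in the real app this prompt would be sent
    to the Gemini API for feedback."""
    return (
        f"Lesson material:\n{lesson}\n\n"
        f"Student's spoken explanation:\n{transcript}\n\n"
        "Identify errors or gaps and give constructive feedback."
    )

transcript = transcribe("explanation.wav")
print(evaluate_explanation("Photosynthesis converts light energy into chemical energy...", transcript))
```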
## Inspiration The inspiration for Hivemind stemmed from personal frustration with the quality of available lectures and resources, which were often insufficient for effective learning. This led us to rely entirely on ChatGPT to teach ourselves course material from start to finish. We realized the immense value of tailored responses and the structured learning that emerged from the AI interactions. Recognizing the potential, we were inspired to create a platform that could harness collective student input to create smarter, more effective lessons for everyone. ## What it does Hivemind is an AI-powered learning platform designed to empower students to actively engage with their course material and create personalized, interactive lessons. By allowing students to input course data such as lecture slides, notes, and assignments, Hivemind helps them optimize their learning process through dynamic, evolving lessons. As students interact with the platform, their feedback and usage patterns inform the system, organically improving and refining the content for everyone. This collaborative approach transforms passive learning into an active, community-driven experience, creating smarter lessons that evolve based on the collective intelligence and needs of all users. ## How we built it * **Backend**: Developed with Django and Django REST Framework to manage data processing and API requests. * **Data Integration**: Used PyMuPDF for text extraction and integrated course materials into a cohesive database. * **Contextual Search**: Implemented Chroma for similarity searches to enhance lesson relevance and context. * **LLM Utilization**: Leveraged Cerebras and TuneAI to transform course content into structured lessons that evolve with user input. * **Frontend**: Created a React-based interface for students to access lessons and contribute feedback. * **Adaptive Learning**: Built a system that updates lessons dynamically based on collective interactions, guiding them towards an optimal state. ## Challenges we ran into * Getting RAG to work with Tune * Creating meaningful inferences with the large volume of data * Integrating varied course materials into a unified, structured format that the LLM could effectively utilize * Ensuring that lessons evolve towards an optimal state based on diverse student interactions and inputs * Sleep deprivation ## Accomplishments that we're proud of * Functional Demo * Integration of advanced technologies * Team effort ## What we learned Throughout the development of Hivemind, we gained valuable insights into various advanced topics, including large language models (LLMs), retrieval-augmented generation (RAG), AI inference, and fine-tuning techniques. We also deepened our understanding of: * Tools such as Tune and Cerebras * Prompt Engineering * Scalable System Design ## What's next for Hivemind * Easy integration with any LMS for instant integration with any course * Support for different types of courses (sciences, liberal arts, languages, etc.) * Train on more relevant data such as research studies and increase the skill level of the model * Create an algorithm that can generate a large number of lessons and consolidate them into one optimal lesson * Implement a peer review system where students can suggest improvements to the lessons, vote on the best modifications, and discuss different approaches, fostering a collaborative learning environment
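The data-integration step described above (extracting text from uploaded lecture slides and notes with PyMuPDF before indexing them for contextual search) can be sketched as follows; the chunk size and overlap are arbitrary choices for illustration.

```python
import fitz  # PyMuPDF

def extract_pages(pdf_path: str) -> list[str]:
    """Pull plain text from every page of an uploaded PDF (slides, notes, assignments)."""
    with fitz.open(pdf_path) as doc:
        return [page.get_text() for page in doc]

def chunk_text(pages: list[str], chunk_chars: int = 800, overlap: int = 100) -> list[str]:
    """Split the extracted text into overlapping chunks suitable for a vector store."""
    text = "\n".join(pages)
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start : start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

chunks = chunk_text(extract_pages("lecture03.pdf"))
print(f"{len(chunks)} chunks ready to be embedded for similarity search")
```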
losing
## Inspiration Games on USB! Back in the good old days when we had actual handheld console games, we used to carry around game cartridges with us the whole time. Cartridges not only carried the game data itself, but also your save files, your game progress, and even your childhood memories. As a 20-something young adult who played too much Pokemon in his childhood days, I couldn't resist building a homage project at a gaming-themed hackathon. With some inspiration from the USB club, I decided to build a USB game cartridge that can be plugged into any computer to play the games stored on it. Just like a game cartridge! You can save your games, carry them around, and wherever you can plug in a USB, you can continue playing on any computer. ## What it does Visit <https://uboy.esinx.net> and you will see... nothing! Well, it's because that's how consoles worked back in those days. Without a cartridge, you can't really play anything, can you? The same applies here! Without a USB, you won't be able to play any games. But once you have your games & saves ready on a USB, you'll be able to load up your game on uBoy and enjoy playing classic console games in your browser! Everything from the game data to the save files is stored on the USB, so you can carry them around and play them on any computer with a browser. ## How we built it uBoy is heavily inspired by the [wasmboy](https://wasmboy.app) project, which brings Nintendo's GameBoy (GB) & GameBoy Color (GBC) emulation to the web. WASM, or WebAssembly, is a technology that allows assembly code to be executed in a browser environment, which unleashes the capabilities of your computer. Previously written emulation projects were easily ported to the web thanks to WASM. Saving to USB works using the File System Access API, a standard web API that allows web applications to read and write files on the user's local file system. This API is still in development, but it's already available in Chrome and Edge. Regardless, many browsers with modern web API support should be able to play uBoy! ## Challenges we ran into * WASM is not the most efficient way to run an emulator. It's not as fast as native code, and it's not as easy to debug as JavaScript. It's a trade-off between performance and portability. * The File System Access API is still in development, so there is not much of a way to interact with the USB itself. * Its alternative, WebUSB, is available but lacks support for mass storage devices like the ones provided by the USB Club. ## Accomplishments that we're proud of * Games on USB! * Saving on USB! * GameBoy & GameBoy Color emulation * Cute UI to make you feel more nostalgic ## What we learned * WebAssembly * File System Access API * GameBoy & GameBoy Color emulation + game memory management * How to create an interaction for web-based gaming ## What's next for uBoy * DS emulation (hopefully) * Trading using USB! + What if you plug multiple USBs into the computer?
## Inspiration After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants. ## What it does Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players’ plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants. ## How we built it ### Back-end: The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO as a websocket layer so that we could support multiplayer; this tripped us up for hours and hours until we finally got it working (a sketch of that event flow follows this writeup). Aside from this, we have an Arduino reading the moisture of the soil and the brightness of the surroundings, plus a camera capturing a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage state, and for storage we used MongoDB to hold all of the data needed. ### Front-end: The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old pokémon games, which we thought might evoke nostalgia for many players. ## Challenges we ran into We had a lot of difficulty setting up Socket.IO and connecting the API through it to the front end and the database. ## Accomplishments that we're proud of We are incredibly proud of integrating our web sockets between frontend and backend and using Arduino data from the sensors. ## What's next for Poképlants * Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard * Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help * Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
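For readers curious about the Socket.IO piece that gave the team so much trouble, here is a minimal Flask-SocketIO sketch of how a plant-battle event could be broadcast to all connected players. The event names and payload fields are invented for illustration; this is not Poképlants' actual code.

```python
# Minimal Flask-SocketIO sketch (event names and payload fields are hypothetical).
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("attack")
def handle_attack(data):
    """A player attacks; relay the move to every connected client."""
    # data might look like {"attacker": "fern-01", "move": "photosynth-blast", "damage": 12}
    emit("battle_update", data, broadcast=True)

@socketio.on("sensor_reading")
def handle_sensor(data):
    """Arduino-side readings (moisture, light) pushed into the shared game state."""
    emit("plant_status", data, broadcast=True)

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```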
## Inspiration In a world where multiplayer gaming experiences are feeling more and more isolating, we wanted to bring personal connection back to multiplayer gaming. To do so we analyzed what has worked best in the past, attempting to capture that unique element which made the multiplayer games of our childhoods so incredibly special. In the end, we drew on several sources of inspiration for this project, taking our favorite components of past gaming-social fads to create what we believe could be a retro gaming experience suited for the modern world. But to achieve our main goal of bringing meaningful connection back to multiplayer gaming, we aimed to incorporate the essence of one of our favorite mechanics of gaming-past: the interpersonal touch of GameBoy-era Pokémon link cable trading. At the very heart of this lofty goal is the humble USB drive, which served as the perfect medium for a gaming platform specifically designed to bring people together. And thus, PlayPort was born. ## What it does In essence, PlayPort is a medium for retro-style games: trading card games, platformers, and more. Our hope is that game developers will recognize that the limitations of the USB drive serve as the perfect guiding force for more innovative collaborative features. By developing games which store locally on a USB, and using our multiplayer USB platform, the benefits of an expansive multiplayer network can be carried over to a gaming experience which incentivizes playing together, side-by-side. And since all app and game data is stored on the USBs themselves, players can quickly jump into a PlayPort experience by plugging their USB drives into any computer. To prove that anyone can make a fun and engaging game on PlayPort, we designed Flash Kingdom, the first game in PlayPort's library. The mechanics of Flash Kingdom, from both a gameplay and software perspective, were designed very specifically to incentivize human connection. The game itself is divided into two main components: acquiring cards and using your virtual deck to battle others. All game data is stored locally on a USB drive, and multiplayer interactions, such as trading cards and battling, require that both players plug their USBs into the same computer. The USB drives are then able to communicate with each other to execute player actions and gameplay mechanics. The choice to use USBs as a medium made a lot of sense from the get-go to encourage in-person connection. But it was to our pleasant surprise that the USB quickly became an integral part of the core gameplay mechanics. Facilitating, and not limiting, enjoyment. ## How we built it We've designed a safe, reliable, self-contained, and USB-centric multiplayer experience by packaging a Flask-based React application directly onto a USB drive. This novel architecture allows USB drives to communicate directly with each other, using the computer as only a medium for information exchange and gameplay. Additionally, we've integrated symmetric encryption techniques to prevent tampering and secure player information (a sketch of this encryption step follows below). Our framework allows USB drives to self-encrypt and self-decrypt their data while interfacing with a laptop, allowing for a safe ejection at any point throughout the game. ## Challenges we ran into We ran into several challenges throughout this project, including the implementation of data encryption, safe ejection techniques, and overall game design challenges.
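The symmetric-encryption idea from the build description above could look something like the following sketch, using the `cryptography` package's Fernet recipe to encrypt card data before it is written back to the USB drive. This is a simplified illustration, not PlayPort's actual scheme; the mount point, file names, and key-storage approach are assumptions.

```python
# Sketch of self-encrypting save data with Fernet (paths and key handling are assumed).
import json
from pathlib import Path
from cryptography.fernet import Fernet

USB_ROOT = Path("/Volumes/PLAYPORT")          # hypothetical mount point
KEY_FILE = USB_ROOT / "player.key"            # in practice the key would not sit next to the data
SAVE_FILE = USB_ROOT / "flash_kingdom.sav"

def load_key() -> bytes:
    """Reuse the drive's key if present, otherwise generate one."""
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def save_deck(deck: dict) -> None:
    """Encrypt the player's card collection before writing it to the drive."""
    token = Fernet(load_key()).encrypt(json.dumps(deck).encode())
    SAVE_FILE.write_bytes(token)

def load_deck() -> dict:
    """Decrypt the save file back into a plain dict."""
    token = SAVE_FILE.read_bytes()
    return json.loads(Fernet(load_key()).decrypt(token))

save_deck({"cards": ["knight", "archer", "dragon"], "wins": 3})
print(load_deck())
```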
## Accomplishments that we're proud of and what we learned We're incredibly proud of everything we've accomplished. With no prior game development experience, we've developed our own game development framework with custom graphics and fully functional USB interfacing as a proof of concept for our idea. ## What's next for PlayPort? We'd love to collaborate with game developers to push the limits of our platform and bring our vision to life.
winning
## What inspired us to build it Guns are now the leading cause of death among American children and teens, with 1 in every 10 gun deaths occurring in individuals aged 19 or younger. School shootings, in particular, have become a tragic epidemic in the U.S., underscoring the urgent need for enhanced safety measures. Our team united with a shared vision to leverage AI technology to improve security in American schools, helping to protect children and ensure their safety. ## What it does Our product leverages advanced AI technology to enhance school safety by detecting potential threats in real-time. By streaming surveillance footage, our AI system can identify weapons, providing instant alerts to security personnel and administrators. In addition to visual monitoring, we integrate audio streaming to analyze changes in sentiment, such as raised voices or signs of distress. This dual approach—combining visual and auditory cues—enables rapid response to emerging threats. ## How we built it We partnered with incredible sponsors—Deepgram, Hyperbolic, Groq, and Fetch.AI—to develop a comprehensive security solution that uses cutting-edge AI technologies. With their support, we were able to conduct fast AI inference, deploy an emergency contact agent, and create intelligent systems capable of tracking potential threats and key variables, all to ensure the safety of our communities. For real-time data processing, we utilized Firebase and Convex to enable rapid write-back and retrieval of critical information. Additionally, we trained our weapon detection agent using Ultralytics YOLO v8 on the Roboflow platform, achieving an impressive ~90% accuracy. This high-performance detection system, combined with AI-driven analytics, provides a robust safety infrastructure capable of identifying and responding to threats in real time. ## Challenges we ran into Streaming a real-time AI object detection model with both low latency and high accuracy was a significant challenge. Initially, we experimented with Flask and FastAPI for serving our model, followed by trying AWS and Docker to improve performance. However, after further optimization efforts, we ultimately integrated Roboflow.js directly in the browser using a Native SDK. This approach gave us a substantial advantage, allowing us to run the model efficiently within the client environment. As a result, we achieved the ability to track weapons quickly and accurately in real time, meeting the critical demands of our security solution. ## Accomplishments that we're proud of We are incredibly proud of the features our product offers, providing a comprehensive and fully integrated security experience. Beyond detecting weapons and issuing instant alerts to law enforcement, faculty, and students through AI-powered agents, we also implemented extensive sentiment analysis. This enables us to detect emotional escalations that may signal potential threats. All of this is supported by real-time security data displays, ensuring that key decision-makers are always informed with up-to-the-minute information. Our system seamlessly brings together cutting-edge AI and real-time data processing to deliver a robust, proactive security solution. ## What we learned We learned that the night is darkest right before the dawn... and that we need to persevere and be steadfast as a team to see our vision come to fruition. ## What's next for Watchdog We want to get incorporated in the American school system!
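To give a concrete sense of the detection loop, here is a hedged sketch of running a trained Ultralytics YOLOv8 weapon-detection model over frames from a camera. The weights file name, class names, and alerting hook are placeholders, not Watchdog's actual code, which runs in the browser via Roboflow.js.

```python
# Sketch of a YOLOv8 weapon-detection loop (weights, class names, and alert hook are assumed).
import cv2
from ultralytics import YOLO

model = YOLO("weapon_detector.pt")   # hypothetical fine-tuned weights (~90% accuracy claimed above)
cap = cv2.VideoCapture(0)            # default camera stands in for a surveillance stream

def alert_security(label: str, confidence: float) -> None:
    # Placeholder for the real notification path (agents, SMS, dashboard write-back).
    print(f"ALERT: detected {label} with confidence {confidence:.2f}")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for result in results:
        for box in result.boxes:
            label = model.names[int(box.cls[0])]
            conf = float(box.conf[0])
            if label in {"gun", "knife"} and conf > 0.6:
                alert_security(label, conf)

cap.release()
```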
## Inspiration According to the Washington Post (June 2023), since Columbine in 1999, more than 356,000 students in the U.S. have experienced gun violence at school. Students of all ages should be able to learn comfortably and safely within the walls of their classroom. Quality education is a UN Sustainable Development goal and can only be achieved when the former becomes a reality. As college students, especially in the midst of the latest UNC-Chapel Hill school shooting, we understand threats lie even within the safety of our campus and have grown up knowing the tragedies of school shootings. This problem is heavily influenced by politics and thus there is an unclear timeline for concrete and effective solutions to be implemented. The intention of our AI model is to contribute a proactive approach that requires only a few pieces of technology but is capable of an immediate response to severe events. ## What it does Our machine learning model is trained to recognize active threats with displayed weapons. When the camera senses that a person has a knife, it automatically calls 911. We also created a machine learning model that uses CCTV camera footage of perpetrators with guns. Specifically, this model was meant to be catered towards guns to address the rising safety issues in education. However, for the purpose of training our model and safety precautions, we could not take training data pictures with a gun and thus opted for knives. We used the online footage as a means to also train on real guns. ## How we built it We obtained an SD card with the IOS for Raspberry Pi, then added the Viam server to the Raspberry Pi. Viam provides a platform to build a machine learning model on their server. We searched the web and imported CCTV images of people with and without guns and tried to find a wide variety of these types of images. We also integrated a camera with the Raspberry Pi to take additional images of ourselves with a knife as training data. In our photos we held the knife in different positions, different lighting, and different people's hands. The more variety in the photos provided a stronger model. Using our data from both sources and the Viam platform we went through each image and identified the knife or gun in the picture by using a border bounding box functionality. Then we trained two separate ML models, one that would be trained off the images in CCTV footage, and one model using our own images as training data. After testing for recognition, we used a program that connects the Visual Studio development environment to our hardware. We integrated Twilio into our project which allowed for an automated call feature. In our program, we ran the ML model using our camera and checked for the appearance of a knife. As a result, upon detection of a weapon, our program immediately alerts the police. In this case, a personal phone number was used instead of authorities to highlight our system’s effectiveness. ## Challenges we ran into Challenges we ran into include connection issues, training and testing limitations, and setup issues. Internet connectivity presented as a consistent challenge throughout the building process. Due to the number of people on one network at the hackathon, we used a hotspot for internet connection, and the hotspot connectivity was often variable. This led to our Raspberry Pi and Viam connections failing, and we had to restart many times, slowing our progress. In terms of training, we were limited in the locations we could train our model in. 
Since the hotspot disconnected if we moved locations, we could only train the model in one room. Ideally, we would have liked to train in different locations with different lighting to improve our model accuracy. Furthermore, we trained a machine learning model with guns, but this was difficult to test for both safety reasons and a lack of resources to do so. In order to verify the accuracy of our model, it would be optimal to test with a real gun in front of a CCTV camera. However, this was not feasible with the hackathon environment. Finally, we had numerous setup issues, including connecting the Raspberry Pi to the SSH, making sure the camera was working after setup and configuration, importing CCTV images, and debugging. We discovered that the hotspot that we connected the Raspberry Pi and the laptop to had an apostrophe in its name, which was the root of the issue with connecting to the SSH. We solved the problem with the camera by adding a webcam camera in the Viam server rather than a transform camera. Importing the CCTV images was a process that included reading the images into the Raspberry Pi in order to access them in Viam. Debugging to facilitate the integration of software with hardware was achieved through iteration and testing. We would like to thank Nick, Khari, Matt, and Hazal from Viam, as well as Lizzie from Twilio, for helping us work through these obstacles. ## Accomplishments that we're proud of We're proud that we could create a functional and impactful model within this 36 hour hackathon period. As a team of Computer Science, Mechanical Engineering, and Biomedical Engineering majors, we definitely do not look like the typical hackathon theme. However, we were able to use our various skill sets, from hardware analysis, code compilation, and design to achieve our goals. Additionally, as it was our first hackathon, we developed a completely new set of skills: both soft and technical. Given the pressure, time crunch, and range of new technical equipment at our fingertips, it was an uplifting experience. We were able to create a prototype that directly addresses a topic that is dear to us, while also communicating effectively with working professionals. ## What we learned We expanded our skills with a breadth of new technical skills in both hardware and software. We learned how to utilize a Raspberry Pi, and connect this hardware with the machine learning platform in Viam. We also learned how to build a machine learning model by labeling images, training a model for object detection, and deploying the model for results. During this process, we gained knowledge about what images were deemed good/useful data. On the software end, we learned how to integrate a Python program that connects with the Viam machine learning platform and how to write a program involving a Twilio number to automate calling. ## What's next for Project LearnSafe We hope to improve our machine learning model in a multifaceted manner. First, we would incorporate a camera with better quality and composition for faster image processing. This would make detection in our model more efficient and effective. Moreover, adding more images to our model would amplify our database in order to make our model more accurate. Images in different locations with different lighting would improve pattern recognition and expand the scope of detection. Implementing a rotating camera would also enhance our system. 
Finally, we would test our machine learning model for guns with CCTV, and modify both models to include more weaponry. Today’s Security. Tomorrow’s Education.
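The Twilio automated-call step mentioned in the build description above could be wired up roughly as below. Credentials, phone numbers, and the TwiML URL are placeholders; only the basic `calls.create` pattern comes from Twilio's standard Python client.

```python
# Sketch of the automated emergency call (credentials, numbers, and TwiML URL are placeholders).
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def call_emergency_contact(weapon_label: str, confidence: float) -> None:
    """Place an automated call when the vision model flags a weapon."""
    client.calls.create(
        to=os.environ["EMERGENCY_NUMBER"],       # a personal number during the demo, not 911
        from_=os.environ["TWILIO_NUMBER"],
        # TwiML hosted elsewhere tells Twilio what to say on the call.
        url="https://example.com/twiml/weapon-alert",  # hypothetical TwiML endpoint
    )
    print(f"Called emergency contact: {weapon_label} detected ({confidence:.0%})")

if __name__ == "__main__":
    call_emergency_contact("knife", 0.87)
```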
## Inspiration We were trying for an IM-meets-MS-Paint experience, and we think it looks like that. ## What it does Users can create conversations with other users by putting a list of comma-separated usernames in the To field. ## How we built it We used Node.js combined with the Express.js web framework, Jade for templating, Sequelize as our ORM, and PostgreSQL as our database. ## Challenges we ran into Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging. ## Accomplishments that we're proud of Getting a (mostly) fully functional chat client up and running in 24 hours! ## What we learned We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production-environment Node app. ## What's next for SketchWave We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would like to include mobile platform support via a responsive web design as well, and possibly in the future even have a mobile app.
partial
## Inspiration Personal experience as university students: not being able to keep track of the groceries we bought and when they expire caused a lot of food and money waste, since items went bad before being used. We wanted a seamless app that helps individuals and families know what they have in their fridge, get meal inspiration based on the food with the shortest shelf-life remaining, and purchase items (drawn from a memory of previous purchases) only when they've run out in the fridge, to prevent over-buying. ## What it does The app allows users to take a photo of their receipt or items purchased and sends this information to "their fridge" database. Each item is linked to its shelf life and is displayed with an easy-to-read "days left" icon. Green means the product will last for 7+ days, orange means there are 3-7 days left, and red means the food is going bad soon, in <3 days. The recipes tab creates meals based off of items that will soon expire in your fridge. The additional "buy" function allows the user to buy items they ran out of, or need for a specific recipe. ## How we built it Our whole tech stack includes AI, cloud computing, and blockchain technology. For the backend, we used Google Cloud serverless services like Cloud Functions and MongoDB to support the features that users do not use often. We also implemented a server on a GCP Compute Engine instance to call the Google Vision API (a sketch of that receipt-OCR step follows this writeup), and we store images in a Firebase database for the Google Vision API. We also used the Algorand blockchain as a payment method. For the frontend, we used React Native to build the mobile app. ## Challenges we ran into * Hard to debug when we are very tired. * Implementing blockchain payments on a Cloud Compute Engine instance. ## Accomplishments that we're proud of * Creating a functional app using blockchain, AI, and computer vision in the short time frame (it took us 6 hours to decide on an idea, so we were very crunched for time) ## What we learned You won't be able to change the world, so start local and think about what differences you want to make in your life. Then think big, and with a strong foundation you can apply it to the world (and incorporate interesting technologies). ## What's next for StayFresh StayFresh will be partnering with local and national grocery chains (including Amazon Whole Foods delivery) to optimize the grocery delivery process.
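As a rough sketch of the receipt-reading step, the snippet below runs Google Cloud Vision text detection on an uploaded receipt photo and pulls out candidate item lines. The file name and the crude line filter are assumptions, not the team's actual pipeline.

```python
# Sketch of receipt OCR with Google Cloud Vision (file name and parsing heuristic are assumed).
from google.cloud import vision

def read_receipt(path: str) -> list[str]:
    """Return the raw text lines Vision finds on a receipt photo."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    if not response.text_annotations:
        return []
    # The first annotation holds the full detected text block.
    full_text = response.text_annotations[0].description
    return [line.strip() for line in full_text.splitlines() if line.strip()]

if __name__ == "__main__":
    for line in read_receipt("receipt.jpg"):   # hypothetical photo taken in the app
        # Downstream, each line would be matched against a shelf-life table.
        print(line)
```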
## Inspiration Each year, approximately 1.3 billion tonnes of produced food is wasted, a startling statistic that we found to be truly unacceptable, especially for the 21st century. The impacts of such waste are widespread, ranging from the millions of starving individuals around the world that could in theory have been fed with this food to the progression of global warming caused by the greenhouse gases released as a result of emissions from decaying food waste. Ultimately, the problem at hand was one that we wanted to fix using an application, which led us precisely to the idea of Cibus, an application that helps the common householder manage the food in their fridge with ease and minimize waste throughout the year. ## What it does Essentially, our app works in two ways. First, the app uses image processing to take pictures of receipts and extract the information from them, which we then further process in order to identify the food purchased and the amount of time until that particular food item will expire. This information is collectively stored in a dictionary that is specific to each user on the app. The second thing our app does is sort through the list of food items that a user has in their home and prioritize the foods that are closest to expiry. With this prioritized list, the app then suggests recipes that maximize the use of food that is about to expire, so that as little of it as possible goes to waste once the user makes the recipes with those ingredients. ## How we built it We essentially split the project into front end and back end work. On the front end, we used iOS development in order to create the design for the app and sent requests to the back end for the information that needed to be displayed in the app itself. Then, on the backend, we used Flask, with Cloud9 as a development environment, in order to compose the code necessary to help the app run. We incorporated image processing APIs as well as a recipe API in order to help our app accomplish the goals we set out for it. Furthermore, we coded the app such that individual accounts can be created within it, and most of the app's functionality was implemented here. We used Google Cloud Vision for OCR and Microsoft Azure for cognitive processing in order to implement a spell check in our app. ## Challenges we ran into A lot of the challenges initially derived from identifying the scope of the program and how far we wanted to take the app. Ultimately, we were able to decide on an end goal and we began programming. Along the way, many roadblocks occurred, including how to integrate the backend seamlessly with the front end and, more importantly, how to integrate the image processing API into the app. Our first attempt at an image processing API did not end well, as the API only allowed for one search at a time, when more were required to find instances of all of the food items necessary to plug into the app. We then turned to Google Cloud Vision, which worked well with the app and allowed us to identify the writing on receipts. ## Accomplishments that we're proud of We are proud to report that the app works and that a user can accurately upload information onto the app and generate recipes that correspond to the items that are about to expire the soonest. Ultimately, we worked together well throughout the weekend and are proud of the final product.
## What we learned We learnt that integrating image processing can be harder than initially expected, but manageable. Additionally, we learned how to program an app from front to back in a manner that blends harmoniously such that the app itself is solid on the interface and in calling information. ## What's next for Cibus There remain a lot of functionalities that can be further optimized within the app, like number of foods with corresponding expiry dates in the database. Furthermore, we would in the future like the user to be able to take a picture of a food item and have it automatically upload the information on it to the app.
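The "prioritize foods closest to expiry" logic described above amounts to a sort over the user's dictionary of items. A minimal sketch, with made-up shelf-life numbers and item names, might look like this:

```python
# Sketch of expiry prioritization (shelf-life values and item names are illustrative only).
from datetime import date, timedelta

# Per-user dictionary: item -> purchase date, plus a shelf-life lookup in days.
fridge = {"spinach": date(2024, 3, 1), "milk": date(2024, 3, 3), "rice": date(2024, 2, 1)}
shelf_life_days = {"spinach": 7, "milk": 10, "rice": 365}

def days_left(item: str, today: date) -> int:
    expiry = fridge[item] + timedelta(days=shelf_life_days[item])
    return (expiry - today).days

def prioritized(today: date) -> list[tuple[str, int]]:
    """Items sorted so the soonest-to-expire come first, ready to feed the recipe API."""
    return sorted(((item, days_left(item, today)) for item in fridge), key=lambda p: p[1])

for item, remaining in prioritized(date(2024, 3, 6)):
    print(f"{item}: {remaining} days left")
```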
## Inspiration We recognized how much time meal planning can consume, especially for busy young professionals and students who have little experience cooking. We wanted to provide an easy way to buy healthy, sustainable meals for the week, without compromising the budget or harming the environment. ## What it does Similar to services like "Hello Fresh", this is a web app for finding recipes and delivering the ingredients to your house. This is where the similarities end, however. Instead of shipping the ingredients to you directly, our app makes use of local grocery delivery services, such as the one provided by Loblaws. The advantages to this are two-fold: first, it helps keep the price down, as your main fee is for the groceries themselves, instead of paying large amounts in fees to a meal kit company. Second, this is more eco-friendly. Meal kit companies traditionally repackage the ingredients in-house into single-use plastic packaging before shipping it to the user, along with large coolers and ice packs which are mostly never re-used. Our app adds no additional packaging beyond what the groceries initially come in. ## How we built it We made a web app, with the client-side code written using React. The server was written in Python using Flask, and was hosted on the cloud using Google App Engine. We used MongoDB Atlas, also hosted on Google Cloud. On the server, we used the Spoonacular API to search for recipes, and Instacart for the grocery delivery. ## Challenges we ran into The Instacart API is not publicly available, and there are no public APIs for grocery delivery, so we had to reverse engineer this API to allow us to add things to the cart. The Spoonacular API was down for about 4 hours on Saturday evening, during which time we almost entirely switched over to a less functional API, before it came back online and we switched back. ## Accomplishments that we're proud of Created a functional prototype capable of facilitating the order of recipes through Instacart. Learning new skills like Flask, Google Cloud, and, for some of the team, React. ## What we learned How to reverse engineer an API, using Python as a web server with Flask, Google Cloud, new APIs, MongoDB ## What's next for Fiscal Fresh Add additional functionality on the client side, such as browsing by popular recipes
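The server-side recipe search mentioned above could be sketched roughly as below with Spoonacular's complex-search endpoint. The endpoint path and response fields are shown as commonly documented and should be treated as assumptions; the API key and query are placeholders.

```python
# Sketch of a recipe search against Spoonacular (endpoint/fields as commonly documented; verify before use).
import os
import requests

API_KEY = os.environ.get("SPOONACULAR_API_KEY", "demo-key")  # placeholder

def search_recipes(query: str, number: int = 5) -> list[dict]:
    resp = requests.get(
        "https://api.spoonacular.com/recipes/complexSearch",
        params={"query": query, "number": number, "apiKey": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for recipe in search_recipes("vegetarian chili"):
        # Each hit would be shown in the React client; its ingredient list feeds the grocery cart.
        print(recipe.get("id"), recipe.get("title"))
```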
losing
## Inspiration Volunteering at a local hospital made us realize the true lack of information that doctors and emergency responders have regarding EMTs and ambulances outside the hospital. We noted that many of their communication systems still relied on radio to update each end, and thus they lacked true awareness of exactly where each of the ambulances is. In addition, relaying details from EMTs over radio can take up life-saving time, which carries its own risks as well. ## What it does Our product, Pendium, allows for easy communication between ambulances and their respective hospitals. Accurate GPS allows tracking of the vehicle, while medical equipment on the ambulance takes quantitative measurements of the patient's health, in aspects including but not limited to blood pressure and heart rate. Finally, it allows ambulances to also provide qualitative communication with the use of diagnosis and message sections. ## How we built it Implemented Solace's PubSub+ event broker and MQTT to create real-time analytic solutions for emergency vehicles and patients. Integrated this with a front-end web application built using Flask that showcases the monitoring of vehicle location and patient conditions. ## Challenges we ran into Many challenges reared their heads during our project. Many logistical bugs were present, as we were not too familiar with online/server-based programming. ## Accomplishments that we're proud of * We managed to use an event broker with a pub/sub system to distribute data. * We learned how to create web applications using both the Django and Flask Python frameworks ## What we learned We learned how to use an event broker and leverage it to solve real-world problems with the use of APIs and scripting. Along with this, we also learned how to use the Flask framework with a SQL database. We didn't know how to use any of these technologies, as we are high schoolers trying to break into the tech industry, but we thought it was extremely rewarding to learn them. ## What's next for Pendium We hope to use actual data collected by EMTs in ambulances to better model our system around it. We hope to integrate this system with actual hospitals to increase healthcare efficiency and preparedness. We hope this relieves some of the pressure placed on the health system and doctors, especially after the pandemic.
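Since Solace PubSub+ speaks standard MQTT, the ambulance-side publisher could be sketched with the common `paho-mqtt` client as below. The broker host, credentials, topic naming, and payload fields are assumptions for illustration, not Pendium's actual configuration.

```python
# Sketch of an ambulance telemetry publisher over MQTT (broker, topic, and payload are assumed).
# Note: written against the paho-mqtt 1.x Client API; 2.x additionally needs a CallbackAPIVersion argument.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("solace-client-username", "secret")    # placeholder credentials
client.connect("mr-connection.messaging.solace.cloud", 1883)  # hypothetical broker host
client.loop_start()

def publish_vitals(ambulance_id: str, heart_rate: int, bp: str, lat: float, lon: float) -> None:
    """Push one telemetry sample that the hospital dashboard subscribes to."""
    payload = {
        "ambulance": ambulance_id,
        "heart_rate": heart_rate,
        "blood_pressure": bp,
        "gps": {"lat": lat, "lon": lon},
        "ts": time.time(),
    }
    client.publish(f"ems/{ambulance_id}/vitals", json.dumps(payload), qos=1)

publish_vitals("unit-12", heart_rate=96, bp="130/85", lat=43.65, lon=-79.38)
```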
## Inspiration Planning get-togethers with my friend group during the pandemic has been rather complicated. Some of them are immunocompromised or live with individuals that are. Because of this, we often have to text individuals to verify their COVID status before allowing them to come to a hangout. This time-consuming and unreliable process inspired me to create PandemicMeet. The theme of this hackathon is exploration and this webapp allows its users to explore without having to worry about potential COVID exposure or micromanaging their guests beforehand. ## What it does This app lets users organize events, invite other users, and verify the COVID and vaccination status of invitees. Once an event is confirmed, invited users will see the event on their account and the organizer can track their guests' statuses to verify the safety of the event. ## How we built it I built a full stack web application using a Flask back-end and Bootstrap front-end. For the database I used a sqlite3 database managed with Flask SQL Alchemy. User's data is stored in 3 separate tables: Users, User\_history and Party which store the user info (username, password, email), COVID status history and meeting info respectively. ## Challenges we ran into There were quite a few problems integrating the front-end with the back-end, but I managed to overcome them and make this app work. ## Accomplishments that we're proud of This was my first time using flask and bootstrap and I am rather pleased with the results. This is also my first college hackathon and I think that this is a decent start. ## What we learned Working on this project drastically improved my proficiency with webapp production. I learned how to use bootstrap and flask to create a full stack web application too. ## What's next for PandemicMeet I plan to improve this app by adding more security measures, email notifications, public user profiles as well as deploying on a server being accessible for people around the world.
## Inspiration In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs. This adds up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core. ## What it does Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and ensure that those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification. ## Challenges we ran into At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon. ## Accomplishments that we're proud of We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely passed each user a unique and nearly impossible-to-reverse hash to allow them to check the status of their evaluation. ## What we learned We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set up a server and configure it for remote access. We learned a lot about how cybersecurity plays a crucial role in the information technology industry.
This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user friendly interface. ## What's next for InfantXpert We're looking to develop a mobile application in IOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
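The "hyper-secure combination of the user's data" hashed with SHA-256, as described above, could look roughly like the sketch below. The exact fields mixed into the hash and the use of a random salt are assumptions, not the team's actual scheme.

```python
# Sketch of generating a per-patient lookup hash with SHA-256 (fields and salting are assumed).
import hashlib
import secrets

def status_token(parent_email: str, infant_id: str, salt: str) -> str:
    """Derive a hard-to-reverse token the parent can use to check the evaluation status."""
    material = f"{parent_email}|{infant_id}|{salt}".encode()
    return hashlib.sha256(material).hexdigest()

if __name__ == "__main__":
    salt = secrets.token_hex(16)          # stored server-side alongside the record
    token = status_token("parent@example.com", "infant-0042", salt)
    print("Share this token with the parent:", token)
```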
losing
## Problem Statement As the number of the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger, and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs. For the cloud-based machine learning algorithms, we used Computer Vision, Open CV, Numpy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones as an analogue to cameras to do the live streams for the real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allow the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges have been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. 
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In this case scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
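One simple way to approximate the fall-detection idea with OpenCV is a bounding-box aspect-ratio heuristic over a background-subtracted stream: a person who stays wider than they are tall for several consecutive frames is flagged. This is a generic sketch, not SafeSpot's trained model, and all thresholds and the stream source are made up.

```python
# Generic fall-detection heuristic with OpenCV (thresholds and stream source are assumed).
import cv2

cap = cv2.VideoCapture(0)                      # phone/CCTV stream stood in for by the default camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
fallen_frames = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    person = max(contours, key=cv2.contourArea, default=None)
    if person is not None and cv2.contourArea(person) > 5000:    # ignore small noise blobs
        x, y, w, h = cv2.boundingRect(person)
        fallen_frames = fallen_frames + 1 if w > 1.3 * h else 0  # lying down: wider than tall
        if fallen_frames > 15:                                   # sustained "fallen" posture
            print("Possible fall detected - notify emergency contacts")
            fallen_frames = 0

cap.release()
```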
## Inspiration Since the pandemic, millions of people worldwide have turned to online alternatives to replace public fitness facilities and other physical activities. At-home exercises have become widely acknowledged, but the problem is that there is no way of telling whether people are doing the exercises accurately and whether they notice potentially physically damaging bad habits they may have developed. Even now, those habits may continuously affect and damage their bodies if left unnoticed. That is why we created **Yudo**. ## What it does Yudo is an exercise web app that uses **TensorFlow AI**, a custom-developed exercise detection algorithm, and **pose detection** to help users improve their form while doing various exercises. Once you open the web app, select your desired workout and Yudo will provide a quick exercise demo video. The closer your form matches the demo, the higher your accuracy score will be. After completing an exercise, Yudo will provide feedback generated via **ChatGPT** to help users identify and correct the discrepancies in their form. ## How we built it We first developed the connection between **TensorFlow** and the live video stream via **BlazePose** and **JSON**. We sent the video's data to TensorFlow, which returned a JSON object of the different nodes and their coordinates. We used these to draw the nodes onto a 2D canvas that updates every frame, projected on top of the video element. The continuous flow of JSON data from TensorFlow helped us create a series of data sets of what different plank forms look like. Using our own data sets, we took the relative positions of the relevant nodes and created mathematical formulas that matched them. After a discussion with Sean, an MLH member, we decided to integrate OpenAI into our project by having it provide feedback based on how good your plank form is. We did so by utilizing the **ExpressJS** back-end to handle requests for the AI-response endpoint. In the process, we also used **nodemon**, a tool that automatically restarts the server on code changes, to help with our development. We also used **Axios** to send data back and forth between the front end and backend. The front end was designed using **Figma** and **Procreate** to create a framework that we could base our **React** components on. Since it was our first time using React and TensorFlow, it took a lot of trial and error to get CSS and HTML elements to work with our React components. ## Challenges we ran into * Learning and implementing TensorFlow AI and React for the first time during the hackathon * Creating a mathematical algorithm that accurately measures the form of a user while performing a specific exercise * Making visual elements appear and move smoothly on a live video feed ## Accomplishments that we're proud of * This is our 2nd hackathon (except Darryl) * Efficient and even work distribution between all team members * Creation of our own data set to accurately model a specific exercise * A visually aesthetic, mathematically accurate and working application! ## What we learned * How to use TensorFlow AI and React * Practical applications of mathematics in computer science algorithms ## What's next for Yudo * Implementation of more exercises * Faster and more accurate live video feed and accuracy score calculations * Provide live feedback during the duration of the exercise * Integrate a database for users to save their accuracy scores and track their progress
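The "mathematical formulas" over relative node positions mentioned above usually come down to joint angles computed from the pose keypoints. Below is a small sketch (written in Python for readability, though Yudo itself runs in the browser) of scoring a plank by how close the shoulder-hip-ankle angle is to a straight line; the keypoint names and tolerances are assumptions, not Yudo's actual formulas.

```python
# Sketch of angle-based plank scoring from pose keypoints (names and tolerances are assumed).
import math

def angle(a, b, c) -> float:
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def plank_score(keypoints: dict) -> float:
    """0-100 score: 100 when shoulder, hip, and ankle form a straight line (180 degrees)."""
    body_angle = angle(keypoints["shoulder"], keypoints["hip"], keypoints["ankle"])
    deviation = abs(180.0 - body_angle)
    return max(0.0, 100.0 - deviation * (100.0 / 45.0))  # zero credit beyond 45 degrees of sag

# Example keypoints in pixel coordinates (hypothetical BlazePose output).
pose = {"shoulder": (120, 200), "hip": (260, 214), "ankle": (400, 230)}
print(f"Plank accuracy: {plank_score(pose):.1f}%")
```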
## SafeWatch *Elderly Patient Fall-Detection | Automated First Responder Information Relay* Considering the increasing number of **elderly patient falls**, SafeWatch automates the responsibilities of senior resident caregivers thus relieving them of substantial time commitments. SafeWatch is a **motion detection software** which recognizes collapsed persons and losses of balance. It is coupled with an instantaneous alert system which can notify building security, off-location loved ones or first responders. Easy integration into pre-existing surveillance camera systems allows for **low implementation costs**. It is a technology which allows us to continuously keep a watchful eye on our loved ones in their old age. **Future applications** of this software include expansion into public areas for rapid detection of car crashes, physical violence, and illicit activity.
winning
## 🌍 Background Unlike the traditional profit-oriented approach in financial investing, responsible investing is a relatively new concept that expressly recognizes the importance of environmental, social, and governance aspects to the investor and the long-term health and stability of the market (Cambridge Institute for Sustainability Leadership, 2021). However, currently, ESG does not have a standardized evaluation system that allows investors to quickly determine the potential of financial products. ## ❣️ Inspiration More recently, some have claimed that ESG standards, in addition to their social value, might protect investors from the crises that arise when businesses that operate in a hazardous or immoral manner are finally held responsible for their effects. Examples include the 2010 Gulf of Mexico oil disaster by BP and the billion-dollar emissions scandal at Volkswagen, which both had a negative impact on the stock values of their respective corporations (Investopedia, 2022). Therefore, creating an easy-to-use ESG evaluation tool for everybody is essential to address the stigma that investing, saving, and budgeting are only for privileged populations. ## ⚙️ Solution Inspired by the current uncertainty about ESG evaluation methods, our team proposed and implemented an online ESG evaluation platform, Stock Stalker, built on our recently developed algorithms, which allows investors to search, manage, and see the overall ESG performance of selected stocks, paired with a built-in recommendation system. To See. To Learn. To Apply. To Earn. To Contribute. Stock Stalker redefines what it means to earn profits while ensuring the investment is making positive impacts on the environment, society, and governance. Using React, a REST API, and our developed algorithms, Stock Stalker offers investors both access to real-time financial data, including the ESG rating of the stocks, and a platform that illustrates the ESG properties of the stock without those hard-to-understand technical details. After using our product, the investor can have a deeper understanding of ESG investments without needing professional finance knowledge. As a result, these investors are now able to make accurate ESG investments based on their interests. ## 🗝️ Key Product Features ·Allow users to search and save selected stocks based on the input of stock symbols. ·Access to real-time financial data such as the Earnings Per Share (EPS) and the current stock price trend. ·Provide numerical ESG ratings in different aspects with our developed algorithms. ·Illustrate ESG properties of the stocks through an easy-to-understand recommendation system. ## ⚙️ Tech Stack ·The prototype was designed with Figma while the front end was built on React ·We used Tailwind CSS, framer-motion and various libraries to decorate our web page ·The backend data was stored on JSON Server, with cors and axios handling requests ·Notifications are sent out using Twilio ·Chart functionality was built with the TradingView widget ## 🔥 What we're proud of Even though we faced difficulties in the back-end implementation, we still figured out the techniques that are required for our site via collaboration. Another accomplishment is that our team members not only learned the programming techniques, but we also learned a lot of finance knowledge during the hack.
Additionally, given that we were primarily a front-end team with no experience in dealing with the back-end, we are incredibly proud that we were able to take up the new concepts in very little time (big thanks to the mentors as well!). ## 🔭 What's Next for Stock Stalker Looking to the future for Stock Stalker, we intend on implementing functions that connect our ESG evaluation site to the actual stock-buying organizations. Further implementation includes an additional page that provides the estimated price of the stock selected from the site with the buying options from different stock buying platforms so that the investors can compare and purchase the stock in a more convenient way. ## 🌐 Best Domain Name from Domain.com As a part of our project, we registered stockstalkertech using Domain.com! You can also access it [here](https://www.stockstalker.tech/).
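Since the post keeps the rating algorithm at a high level, the snippet below is only a generic illustration of how per-pillar ESG ratings might be combined into one score and a plain-language recommendation. The weights, thresholds, and labels are entirely invented and are not Stock Stalker's actual algorithm.

```python
# Generic illustration of combining pillar ratings into an overall ESG score (weights/thresholds invented).
PILLAR_WEIGHTS = {"environmental": 0.4, "social": 0.3, "governance": 0.3}

def overall_esg(ratings: dict[str, float]) -> float:
    """Weighted average of 0-100 pillar ratings."""
    return sum(PILLAR_WEIGHTS[p] * ratings[p] for p in PILLAR_WEIGHTS)

def recommendation(score: float) -> str:
    if score >= 70:
        return "Strong ESG profile - aligns well with responsible-investing goals"
    if score >= 50:
        return "Mixed ESG profile - review the weakest pillar before investing"
    return "Weak ESG profile - consider alternatives"

stock = {"environmental": 62, "social": 74, "governance": 81}   # hypothetical ratings for one ticker
score = overall_esg(stock)
print(f"Overall ESG score: {score:.1f} -> {recommendation(score)}")
```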
## Inspiration In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions. Moreover, the sheer volume of news articles, financial analyses, and market updates is overwhelming. For most people, finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing, we can condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well. Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity. ## What it does This website automatically scrapes news articles from the domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies. This allows users to focus on high-profile stocks that are relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools. This determines whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding. ## How we built it Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis we used TensorFlow, Pandas, and NumPy. ## Challenges we ran into The original dataset we intended to use for training our model was too small to provide meaningful results, so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging, and we worked diligently to refine the design, balancing usability with visual appeal. ## Accomplishments that we're proud of We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performances. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions. We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals.
It was our team's first time training a model and performing sentiment analysis, and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours. ## What we learned We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly. ## What's next for MoneyMoves As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform, which could include interactive tutorials and accessible resources.
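Once each scraped sentence has a sentiment score, turning that into per-company guidance is essentially a group-by and threshold step. The sketch below shows that aggregation with pandas; the scores, tickers, and cutoffs are invented for illustration and are not MoneyMoves' actual rules.

```python
# Sketch of aggregating sentence-level sentiment into per-company recommendations (data invented).
import pandas as pd

# Each row: a scraped sentence mentioning a top-100 company, scored in [-1, 1] by the model.
mentions = pd.DataFrame(
    {
        "ticker": ["AAPL", "AAPL", "TSLA", "TSLA", "TSLA", "MSFT"],
        "sentiment": [0.6, 0.2, -0.4, -0.7, 0.1, 0.05],
    }
)

summary = mentions.groupby("ticker")["sentiment"].agg(["mean", "count"])

def recommend(mean_sentiment: float) -> str:
    if mean_sentiment > 0.2:
        return "buy"
    if mean_sentiment < -0.2:
        return "sell"
    return "hold"

summary["recommendation"] = summary["mean"].apply(recommend)
print(summary)
```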
## Inspiration We're all told that stocks are a good way to diversify our investments, but taking the leap into trading stocks is daunting. How do I open a brokerage account? What stocks should I invest in? How can one track their investments? We learned that we were not alone in our apprehensions, and that this problem is even worse in other countries. For example, in Indonesia (Scott's home country), only 0.3% of the population invests in the stock market. A lack of active retail investor community in the domestic stock market is very problematic. Investment in the stock markets is one of the most important factors that contribute to the economic growth of a country. That is the problem we set out to address. In addition, the ability to invest one's savings can help people and families around the world grow their wealth -- we decided to create a product that makes it easy for those people to make informed, strategic investment decisions, wrapped up in a friendly, conversational interface. ## What It Does PocketAnalyst is a Facebook messenger and Telegram chatbot that puts the brain of a financial analyst into your pockets, a buddy to help you navigate the investment world with the tap of your keyboard. Considering that two billion people around the world are unbanked, yet many of them have access to cell/smart phones, we see this as a big opportunity to push towards shaping the world into a more egalitarian future. **Key features:** * A bespoke investment strategy based on how much risk users opt to take on, based on a short onboarding questionnaire, powered by several AI models and data from Goldman Sachs and Blackrock. * In-chat brokerage account registration process powered DocuSign's API. * Stock purchase recommendations based on AI-powered technical analysis, sentiment analysis, and fundamental analysis based on data from Goldman Sachs' API, GIR data set, and IEXFinance. * Pro-active warning against the purchase of a high-risk and high-beta assets for investors with low risk-tolerance powered by BlackRock's API. * Beautiful, customized stock status updates, sent straight to users through your messaging platform of choice. * Well-designed data visualizations for users' stock portfolios. * In-message trade execution using your brokerage account (proof-of-concept for now, obviously) ## How We Built it We used multiple LSTM neural networks to conduct both technical analysis on features of stocks and sentiment analysis on news related to particular companies We used Goldman Sachs' GIR dataset and the Marquee API to conduct fundamental analysis. In addition, we used some of their data in verifying another one of our machine learning models. Goldman Sachs' data also proved invaluable for the creation of customized stock status "cards", sent through messenger. We used Google Cloud Platform extensively. DialogFlow powered our user-friendly, conversational chatbot. We also utilized GCP's computer engine to help train some of our deep learning models. Various other features, such as the app engine and serverless cloud functions were used for experimentation and testing. We also integrated with Blackrock's APIs, primarily for analyzing users' portfolios and calculating the risk score. We used DocuSign to assist with the paperwork related to brokerage account registration. ## Future Viability We see a clear path towards making PocketAnalyst a sustainable product that makes a real difference in its users' lives. 
We see our product as one that will work well in partnership with other businesses, especially brokerage firms, similar to what Credit Karma does with credit card companies. We believe that giving consumers access to a free chatbot to help them invest will make their investment experiences easier, while also freeing up time in financial advisors' days. ## Challenges We Ran Into Picking the correct parameters/hyperparameters and discerning how our machine learning algorithms should make recommendations in different cases. Finding the best way to onboard new users and provide a fully-featured experience entirely through conversation with a chatbot. Figuring out how to get this done despite not having access to a consistent internet connection (still love ya tho Cal :D). Still, this hampered our progress on a more ambitious IoT (w/ Google Assistant) stretch goal. Oh, well :) ## Accomplishments That We Are Proud Of We are proud of our decision to combine various machine learning techniques with Goldman Sachs' Marquee API (and their global investment research dataset) to create a product that can provide real benefit to people. We're proud of what we created over the past thirty-six hours, and we're proud of everything we learned along the way! ## What We Learned We learned how to incorporate existing machine learning strategies and combine them to improve our collective accuracy in making predictions for stocks. We learned a ton about the different ways that one can analyze stocks, and we had a great time slotting together all of the different APIs, libraries, and other technologies that we used to make this project a reality. ## What's Next for PocketAnalyst This isn't the last you've heard from us! We aim to better fine-tune our stock recommendation algorithm. We believe there are other parameters, not yet accounted for, that could further improve the accuracy of our recommendations. Down the line, we hope to partner with finance professionals to provide more insights that we can incorporate into the algorithm.
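For context, a minimal sketch of how an LSTM trend model like the one PocketAnalyst describes could be wired up. The write-up does not name a framework, so Keras, the window length, the feature set, and the up/down labeling rule are all assumptions for illustration, not the team's actual code:

```python
# Hypothetical sketch of an LSTM trend classifier over daily stock features.
# Framework (Keras), window length, feature layout, and labels are assumptions.
import numpy as np
from tensorflow import keras

WINDOW = 30          # days of history per sample (assumed)
N_FEATURES = 5       # e.g. open, high, low, close, volume (assumed)

def build_model() -> keras.Model:
    model = keras.Sequential([
        keras.layers.Input(shape=(WINDOW, N_FEATURES)),
        keras.layers.LSTM(64),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # P(price goes up)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def make_windows(prices: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Slice a (days, N_FEATURES) array into overlapping windows, labelling
    each window 1 if the next day's close (column 3, assumed) is higher."""
    xs, ys = [], []
    for i in range(len(prices) - WINDOW - 1):
        xs.append(prices[i:i + WINDOW])
        ys.append(float(prices[i + WINDOW, 3] > prices[i + WINDOW - 1, 3]))
    return np.array(xs), np.array(ys)
```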
partial
## Inspiration Battleship is a popular game, and we wanted to use our skills to translate the idea of the game into code. ## What it does This code creates a 10 x 10 board with 5 hidden enemy submarines. The user enters both the row and column coordinates. If there is a submarine at that coordinate, a sunk message will appear and the sign "#" will be placed at the coordinate. If there is no submarine at that coordinate, the board will show a number that tells the user how far away the nearest submarine is. ## How we built it We mainly used C++ to create the game and used web development languages for the front-end. ## Challenges we ran into We had a hard time trying to connect the back-end to the front-end. ## What's next for Battlemarines Hopefully figuring out ways to connect the website to the game itself.
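The hit/hint rule described above is simple enough to sketch. The original game is written in C++; this Python version is only an illustrative re-implementation, and the use of Manhattan distance for the hint is an assumption:

```python
# Illustrative sketch of the Battlemarines hint logic. The 10x10 board and
# 5 submarines come from the write-up; the distance metric is assumed.
import random

SIZE, NUM_SUBS = 10, 5

def new_game():
    board = [["." for _ in range(SIZE)] for _ in range(SIZE)]
    subs = set()
    while len(subs) < NUM_SUBS:
        subs.add((random.randrange(SIZE), random.randrange(SIZE)))
    return board, subs

def fire(board, subs, row, col):
    if (row, col) in subs:
        subs.remove((row, col))
        board[row][col] = "#"          # mark the sunk submarine
        return "Sunk!"
    if not subs:
        return "All submarines already sunk!"
    # Otherwise show the distance to the nearest remaining submarine.
    nearest = min(abs(r - row) + abs(c - col) for r, c in subs)
    board[row][col] = str(nearest)
    return f"Miss - nearest submarine is {nearest} away"
```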
## Inspiration One of my favorite apps is Spotify. It is by far the app that I use the most, and I often find myself and my friends sharing music with each other. I thought, wouldn't it be great if I could discover music from people around me, while also connecting with complete strangers over music? ## What it does Spotifind is a RESTful web API that allows its users to contribute to collaborative, geotagged playlists. I included a simple front end to display its capabilities. Users are only allowed to contribute to the closest playlist to them, but are able to browse all playlists in the nearby vicinity. ## How I built it I began Spotifind at CalHacks. Every now and then, I'd come back and work on it a little bit, because it seems like something I genuinely would use. I wrote it in C#, utilizing the Spotify API as well as a point-of-interest SQL database. ## Challenges I ran into There were two incredibly difficult aspects of Spotifind. The first was handling authentication. Because I wanted to write collaborative playlists, accessible by all users, I had to create a user account for the application. This required really digging into the authentication flow. Eventually, I realized I could use the refresh token to essentially allow the application to remain logged in indefinitely. This allowed for programmatically adding playlists, songs, and all other necessary actions. The second was deploying the application to Microsoft Azure. Unfortunately, I was not able to overcome this before the deadline. Initially, it was incredibly difficult to upload my databases and codebase to Azure. After hours of work, I was able to deploy it - but the problems didn't end. My SQL calls were not efficient enough and ran too many queries on top of the network latencies. That led to many of the functionalities breaking in the deployed version. ## Accomplishments that I'm proud of I'm proud of writing a functioning backend and front end (on my machine). It was my first full-stack project, and even though I wasn't able to complete it to the degree I wanted to, I learned so much. ## What I learned I learned how to write a .NET Web API, and that data structures really do matter (not just for class)! Had I written my data structures a little better, those SQL calls might have been more efficient, and deploying would not have been as difficult as it was. ## What's next for Spotifind I plan on rewriting the models and data structures to allow for deploying onto Azure or AWS. This is something that I could see myself using extensively, and I want to discover more music!
## Inspiration After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants. ## What it does Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players' plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants. ## How we built it ### Back-end: The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for a websocket so that we could support multiplayer; this tripped us up for hours and hours until we finally got it working. Aside from this, we have an Arduino that reads the moisture of the soil and the brightness of the surroundings, as well as a picture of the plant, on which we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage the states, and for storage we used MongoDB to hold all of the data needed. ### Front-end: The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players. ## Challenges we ran into We had a lot of difficulty setting up Socket.IO and connecting the API to the front end and the database. ## Accomplishments that we're proud of We are incredibly proud of integrating our web sockets between frontend and backend and using Arduino data from the sensors. ## What's next for Poképlants * Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard * Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help * Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
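Since the back-end section above centers on getting Socket.IO multiplayer working in Python, here is a minimal sketch of that piece. The event names, the in-memory battle registry, and the plant-stats payload are assumptions for illustration, not Poképlants' actual server:

```python
# Minimal python-socketio server sketch (aiohttp transport). Event names,
# room bookkeeping, and payload shapes are invented for this example.
import socketio
from aiohttp import web

sio = socketio.AsyncServer(cors_allowed_origins="*")
app = web.Application()
sio.attach(app)

battles: dict[str, set[str]] = {}  # battle_id -> connected player sids

@sio.event
async def join_battle(sid, data):
    battles.setdefault(data["battle_id"], set()).add(sid)
    await sio.emit("player_joined", {"player": sid, "battle_id": data["battle_id"]})

@sio.event
async def sensor_update(sid, data):
    # e.g. {"plant_id": "fern-1", "moisture": 412, "light": 730}
    # Healthier readings would translate into stronger in-game abilities.
    await sio.emit("plant_stats", data)

if __name__ == "__main__":
    web.run_app(app, port=5000)
```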
losing
## Inspiration Most of us have probably donated to a cause before — be it $1 or $1000. As a result, most of us here have probably also had the same doubts: * who is my money really going to? * what is my money providing for them...if it's providing for them at all? * how much of my money actually gets used by the individuals I'm trying to help? * is my money really making a difference? CarePak was founded to break down those barriers and connect more humans to other humans. We were motivated to create an application that could create a meaningful social impact. By creating a more transparent and personalized platform, we hope that more people can be inspired to donate in more meaningful ways. As an avid donor, Aran has long dreamed of making CarePak. ## What it does CarePak is a web application that seeks to simplify and personalize the charity donation process. In our original designs, CarePak was a mobile app. We decided to make it into a web app after a bit of deliberation, because we thought that we'd be able to get more coverage and serve more people. Users are given options of packages made up of predetermined items created by charities for various causes, and they may pick and choose which of these items to donate towards at a variety of price levels. Instead of simply donating money to organizations, CarePak's platform appeals to donors since they know exactly what their money is going towards. Once each item in a care package has been purchased, the charity now has a complete package to send to those in need. Through donating, the user will build up a history, which will be used by CarePak to recommend similar packages and charities based on the user's preferences. Users have the option to see popular donation packages in their area, as well as popular packages worldwide. ## How I built it We used React with the Material UI framework, and NodeJS and Express on the backend. The database is SQLite. ## Challenges I ran into We initially planned on using MongoDB but discovered that our database design did not seem to suit MongoDB too well, and this led to some lengthy delays. On Saturday evening, we made the decision to switch to a SQLite database to simplify the development process and were able to entirely restructure the backend in a matter of hours. Thanks to carefully discussed designs and good teamwork, we were able to make the switch without any major issues. ## Accomplishments that I'm proud of We made an elegant and simple application with ideas that could be applied in the real world. Both the front-end and back-end were designed to be modular and could easily support some of the enhancements that we had planned for CarePak but were unfortunately unable to implement within the deadline. ## What I learned To have a more careful selection process for tools and languages at the beginning of the hackathon development process, reviewing their suitability in helping build an application that achieves our planned goals. Any extra time we could have spent on the planning process would definitely have been more than saved by not having to make major backend changes near the end of the hackathon. ## What's next for CarePak * We would love to integrate machine learning features from AWS in order to gather data and create improved suggestions and recommendations for users. * We would like to add a view for charities, as well, so that they may be able to sign up and create care packages for the individuals they serve.
Hopefully, we would be able to create a more attractive option for them as well through a simple and streamlined process that brings them closer to donors.
## 💡 Inspiration 💡 Our team is saddened by the fact that so many people think that COVID-19 is over when the virus is still very much relevant and impactful to us. We recognize that there are still a lot of people around the world who are quarantining, which can be a very depressing situation to be in. We wanted to create some way for people in quarantine, now or in the future, to help them stay healthy both physically and mentally, and to do so in a fun way! ## ⚙️ What it does ⚙️ We have a full range of features. Users are welcomed by our virtual avatar, Pompy! Pompy is meant to be a virtual friend for users during quarantine. Users can view Pompy in 3D to see it with them in real-time and interact with Pompy. Users can also view a live recent data map that shows the relevance of COVID-19 even at this time. Users can also take a photo of their food to see the number of calories they eat to stay healthy during quarantine. Users can also escape their reality by entering a different landscape in 3D. Lastly, users can view a roadmap of next steps in their journey to get through their quarantine, and speak to Pompy. ## 🏗️ How we built it 🏗️ ### 🟣 Echo3D 🟣 We used Echo3D to store the 3D models we render. Each rendering of Pompy in 3D and each landscape is a different animation that our team created in 3D rendering software, Cinema 4D. We realized that, as the app grows, we could find it difficult to store all the 3D models locally. By using Echo3D, we download only the 3D models that we need, optimizing memory usage and keeping the runtime smooth. We can see Echo3D becoming even more useful as the number of animations we create increases. ### 🔴 An Augmented Metaverse in Swift 🔴 We used Swift as the main component of our app, and used it to power our Augmented Reality views (ARViewControllers), our photo views (UIPickerControllers), and our speech recognition models (AVFoundation). To bring our 3D models to Augmented Reality, we used ARKit and RealityKit in code to create entities in the 3D space, as well as listeners that allow us to interact with 3D models, like with Pompy. ### ⚫ Data, ML, and Visualizations ⚫ There are two main components of our app that use data in a meaningful way. The first and most important is using data to train ML algorithms that are able to identify a type of food from an image and to predict the number of calories of that food. We used OpenCV and TensorFlow to create the algorithms, which are called in a Python Flask server. We also used data to show a choropleth map of the active COVID-19 cases by region, which helps people in quarantine see how relevant COVID-19 still is (and it still very much is)! ## 🚩 Challenges we ran into We wanted a way for users to communicate with Pompy through words and not just tap gestures. We planned to use voice recognition in AssemblyAI to receive the main point of the user and create a response to the user, but found a challenge when dabbling in audio files with the AssemblyAI API in Swift. Instead, we overcame this challenge by using a Swift-native speech library, namely AVFoundation and AVAudioPlayer, to get responses to the user! ## 🥇 Accomplishments that we're proud of We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for while interacting with it, virtually traveling places, talking with it, and getting through quarantine happily and healthily.
## 📚 What we learned Over the last 36 hours, we learned a lot of new things from each other and about how to collaborate to make a project. ## ⏳ What's next for Pompy? We can use Pompy to help diagnose the user's conditions in the future; things users would otherwise be uncomfortable sharing, such as their symptoms and inner thoughts, can be shared more easily with a character like Pompy. While our team has set out for Pompy to be used in a quarantine situation, we envision many other relevant use cases where Pompy can offer companionship in hard times, such as during periods of anxiety and loneliness. Furthermore, we envisage the Pompy application being a resource hub for users to improve their overall wellness. By providing valuable sleep hygiene guidance, exercise tips, and even lifestyle advice, Pompy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery. \*\*Note: we had to use separate GitHub workspaces due to conflicts.\*\*
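The write-up above mentions a Python Flask server that runs the OpenCV/TensorFlow food models. A minimal sketch of what such an endpoint could look like; the route name, model file, input size, and calorie table are assumptions, and the real Pompy models are not shown:

```python
# Hypothetical Flask endpoint: classify a food photo and return a calorie
# estimate. Model path, labels, and per-serving calorie values are placeholders.
import cv2
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("food_classifier.h5")   # assumed artifact
LABELS = ["apple", "pizza", "salad"]                        # assumed classes
CALORIES = {"apple": 95, "pizza": 285, "salad": 150}        # rough values

@app.route("/classify", methods=["POST"])
def classify():
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    img = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    img = cv2.resize(img, (224, 224)) / 255.0
    probs = model.predict(img[np.newaxis, ...])[0]
    label = LABELS[int(np.argmax(probs))]
    return jsonify({"food": label, "calories": CALORIES[label]})

if __name__ == "__main__":
    app.run(port=5000)
```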
## Inspiration Throughout our high school years, my team and I were heavily interested in volunteering and humanitarian aid. We joined several youth organizations to contribute our efforts, but we were never able to apply our strongest skill: computer science. We decided to do something about this issue to help computer scientists and engineers apply their skills and make a profound impact on our society. At the same time, non-profit organizations have trouble recruiting computer scientists to build their software, mainly due to their low budgets, which makes it difficult for these charities to function and contribute to society. Thus, we decided to create TechConnect, a platform where computer science enthusiasts of any level can take on software development projects listed by community organizations. While these freelancers can build their portfolio, the charities benefit by receiving tech support at little to no cost. ## What it does Our website helps match computer programming enthusiasts with non-profit organizations. More specifically, the webform accepts requests for technological aid from non-profits and posts them onto the platform. When programmers are interested in undertaking a project, they can directly apply towards the cause and our website helps match the pair together. ## How we built it We built this project using JavaScript, HTML/CSS, Bootstrap, and Firebase. We designed our website with HTML/CSS and Bootstrap and implemented the databases using JavaScript and Firebase. There were two databases in our program, where we stored the non-profit organizations' and job applicants' information. ## Challenges we ran into When making this project, one of our struggles was designing a visually appealing and functional website. By using Bootstrap and carefully designing the details, we were able to overcome this problem. The other issue we came across was implementing Firebase to store our inputted data. Since we were new to using Firebase, there were many methods and commands that we were not familiar with. However, through experimentation and self-study, as well as some general aid from the mentors, we were able to quickly get a grasp of the tool and implement it in our platform. ## Accomplishments that we're proud of Dennis and I are really proud of learning and implementing Firebase for the first time. It was difficult at first, but there were very helpful mentors who guided us along the way. ## What we learned We learned how to store, retrieve, and work with values from Firebase, and developed a stronger understanding of the programming languages that we used to make this project. Overall, it was a great learning experience, especially as it was our first hackathon, and we are very proud that we were able to complete the program. ## What's next for TechConnect Next time, we hope to use Firebase to collect all kinds of values. The values we retrieved in our program were all strings, but we hope to be able to gather files, lists, and more. We also want to implement a cryptocurrency API to make the financial transaction process between the freelancers and organizations easier.
winning
## Inspiration Our love for cards and equity ## What it does A custom camera rig captures the cards in play, our OpenCV-powered program looks for cards and sends them to our self-trained machine learning model, which then displays relevant information and suggests the optimal move for you to win! ## How we built it * Python and Jupyter for the code * OpenCV for image detection * TensorFlow and Matplotlib for the machine learning model ## Challenges we ran into * We had never used any of these technologies (aside from Python) before ## Accomplishments that we're proud of * We can't believe it kind of works; it's amazing, and we are so proud. ## What we learned * Re: How we built it ## What's next for Machine Jack * Improving the ML model, and maybe supporting other games. The foundation is there and extensible!
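A stripped-down sketch of the capture-and-classify loop described above. The contour threshold, crop size, model file, and label list are assumptions; only the OpenCV-plus-TensorFlow flow mirrors the write-up:

```python
# Sketch of the Machine Jack pipeline: find card-like contours with OpenCV,
# then classify each crop with a trained TensorFlow model. All thresholds,
# the model artifact, and the labels are illustrative placeholders.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("card_classifier.h5")   # assumed artifact
RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]

def find_cards(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only large, roughly card-sized contours.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 5000]

def classify(frame, box):
    x, y, w, h = box
    crop = cv2.resize(frame[y:y + h, x:x + w], (64, 96)) / 255.0
    probs = model.predict(crop[np.newaxis, ...])[0]
    return RANKS[int(np.argmax(probs))]
```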
## Inspiration We wanted to learn more about machine learning. We thought of sign language translation after hearing the other theme was connectivity, even though that theme referred to the technology more than the word itself. ## What it does The board interfaces with a webcam to take a picture. The picture is converted to a grayscale image and centered around the hand. The Binary Neural Network (using the weights generated from our machine learning training) then judges the image and outputs a string saying what the hand sign is. ## How we built it We found an MNIST-type sign language dataset on Kaggle. Using Pynq's library and CUDA, we trained our own neural network on one of our personal laptops. This generated weights to be implemented on the PynqZ2 with its Binary Neural Network overlay. We then used their example scripts as a foundation for our own scripts for taking pictures and processing images. ## Challenges we ran into Some of the challenges we faced were mostly centered around creating the machine learning environment that generated the weights used to judge the images taken by the webcam. A lot of libraries that we had to use (i.e., Theano) were often outdated, causing us to downgrade/reinstall many times to get the whole thing working. Additionally, setting up the board to interface with the webcam was an issue. The SD card would often be corrupted and slow down our progress. ## Accomplishments that we're proud of Considering our lack of experience using machine learning libraries, we were proud to be able to set up the machine learning environment that allowed us to train. Additionally, we were able to learn about the general approach to machine learning. ## What we learned We learned how to set up the CUDA library and how to use the basics of the PynqZ2 board and a few of its overlays. ## What's next for Sign Language Image Translation with PynqZ2 Board Ideally, we would like to have the board read from a video instead of a set of images. This would allow for live and much more accurate image translation, and would also allow us to push the board to its full capabilities.
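A small sketch of the preprocessing step described above: grayscale the frame and center the crop on the hand before it is fed to the BNN overlay. The largest-contour heuristic and the 28x28 target size are assumptions based on the MNIST-style dataset mentioned, not the team's exact scripts:

```python
# Preprocessing sketch: grayscale, locate the hand via the largest contour,
# crop around it, and resize to a 28x28 MNIST-style input for the BNN.
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        gray = gray[y:y + h, x:x + w]          # center the image on the hand
    return cv2.resize(gray, (28, 28))          # assumed BNN input size
```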
# yhack JuxtaFeeling is a Flask web application that visualizes the varying emotions between two different people having a conversation through our interactive graphs and probability data. By using the Vokaturi, IBM Watson, and Indicoio APIs, we were able to analyze both written text and audio clips to detect the emotions of two speakers in real-time. Acceptable file formats are .txt and .wav. Note: To differentiate between different speakers in written form, please include two new lines between different speakers in the .txt file. Here is a quick rundown of JuxtaFeeling through our slideshow: <https://docs.google.com/presentation/d/1O_7CY1buPsd4_-QvMMSnkMQa9cbhAgCDZ8kVNx8aKWs/edit?usp=sharing>
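Given the note above that speakers in a .txt file are separated by two new lines, a minimal sketch of the per-speaker split before sentiment scoring might look like the following. The alternating-speaker assumption and the placeholder scoring callback are illustrative; the real app calls the Vokaturi, IBM Watson, and Indico APIs:

```python
# Sketch of the .txt parsing rule JuxtaFeeling describes: turns are separated
# by a blank line and speakers alternate. score_fn stands in for the API calls.
def split_turns(path: str):
    with open(path, encoding="utf-8") as f:
        turns = [t.strip() for t in f.read().split("\n\n") if t.strip()]
    # Alternate turns between speaker A and speaker B (assumed convention).
    return [("A" if i % 2 == 0 else "B", turn) for i, turn in enumerate(turns)]

def analyze(path: str, score_fn):
    """score_fn(text) -> dict of emotion probabilities (e.g. an API call)."""
    return [(speaker, score_fn(text)) for speaker, text in split_turns(path)]
```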
losing
## Inspiration Inspired by true events. A night we don't fully remember. Imagine: You're out with your friends. A couple drinks in. You spend too long at the pre, and now the line to the club is super long. You don't get home on time. You forget to drink water so you're super hungover the next day. And even worse, you texted that toxic ex. ## What it does This application monitors your drinking experience by sending reminders, blocking contacts and saving user locations. ## How we built it Swift using XCode. ## Challenges we ran into This was mostly our first time using Swift, so there was a steep learning curve for us. Not all of us had Macbooks as well, so we had to optimize our resources. ## Accomplishments that we're proud of Our app works!! And we were able to deploy it to our phones and use it with functionality. ## What we learned How to use Swift, and build our own app. ## What's next for Blackout Buddy Links to drinking games, and a good night out.
## Inspiration As college students, we understand and empathize with the concerns and the next-morning struggles that come with a wild night of partying and drinking. So we created Glass, a mobile app that brings together some of the most important services needed to ensure a safe and responsible drinking experience. ## What it does Glass has four main features: Shot Tracker, Curfew, Day After, and Ride Home. (1) Shot Tracker: Set how many drinks you are willing to take that night, and throughout the night update how many shots you have taken. The app will notify you if you are close to reaching your drinking limit, so that you don’t get too drunk and do anything stupid! (2) Curfew: Set a time you want to get home by. If you are not home by that time according to our location tracking system, the app will automatically send a message to an appointed friend. (3) Day After: Set a time you want to wake up the next morning. No need to even go looking for your other alarm app hidden somewhere else in your phone. (4) Ride Home: Worried about how you’re going to get home safely after a night of drinking? Our fourth and final feature redirects you to different ride apps so that you have several options at your disposal. They are both safe and widely available. ## How we built it We built the iOS app via Xcode, using the programming language Swift. ## Challenges we ran into We built the app among just the two of us. As we were a smaller sized team, the obvious challenge was productivity and time management. This challenge allowed us to realize the paramount importance of working together as a single unit and staying focused on the vision. By sticking to these fundamentals, we were able to successfully deliver a product that we take full pride in. ## Accomplishments that we're proud of None of us had experience with iOS before, but we picked it up pretty quickly and managed to build an app in a short time. ## What we learned We learned how to use XCode to build an app. ## What's next for Glass - An app for responsible drinking Now that we have developed a minimal viable product at this hackathon, we are excited to release the app to a number of early customers and acquire their valuable feedback to make further improvements on the product. We will continue to update and modify this product to encourage a safe and responsible drinking experience for all.
## Inspiration As lane-keep assist and adaptive cruise control features are becoming more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system ## What it does We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately, by braking, slowing down, and/or turning ## How we built it Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle ## Challenges we ran into One of the main challenges was transferring data from an iOS app/device to Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly ## Accomplishments that we're proud of Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight! ## What we learned We learned about Swift and more generally about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future ## What's next for Anti-Bumper Car - A Collision Avoidance System Specifically for this project, solving an issue related to file IO and reducing latency would be the next step in providing a more reliable collision avoiding system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road
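A hedged sketch of the decision step described above: map the detected POV's distance and closing speed to a driving mode and post it to the HTTP endpoint that relays commands to the Arduino. The endpoint URL, thresholds, and mode names are invented for illustration; the real endpoint runs on Autocode in JavaScript:

```python
# Illustrative decision logic for the collision-avoidance loop. The URL,
# thresholds, and mode names are assumptions, not the project's actual values.
import requests

ENDPOINT = "https://example.autocode.dev/drive-mode"   # hypothetical endpoint

def choose_mode(distance_m: float, closing_speed_mps: float) -> str:
    if distance_m < 0.3:
        return "brake"
    if distance_m < 0.8 or closing_speed_mps > 0.5:
        return "slow"
    return "cruise"

def send_mode(distance_m: float, closing_speed_mps: float) -> None:
    mode = choose_mode(distance_m, closing_speed_mps)
    requests.post(ENDPOINT, json={"mode": mode}, timeout=1.0)
```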
losing
## Inspiration Positive cases are spreading really quickly on campus, and many students decide against going to dormitories because of ineffective mask enforcement. We hope to make these places safer so college students can come back to university sooner and enjoy their precious student-life experience more! ## What it does Cozy Koalas allows for the identification of people as well as whether or not they're currently wearing a mask, using YOLOv5. It would allow dormitories to monitor statistics such as the number of people with/without masks throughout the weeks. Another feature of our application is our infrared sensor, which detects the temperature of an individual (<https://ieeexplore.ieee.org/document/9530864>). If that person's temperature is abnormally high and they may have a fever, a notification is sent using Twilio so that they are aware of this and will take action to reduce risk. ## How we built it *Machine Learning*: We use images from a camera feed to detect a) whenever a person comes into the frame, and b) whether that person is: 1. Not wearing a mask 2. Wearing a mask incorrectly 3. Wearing a mask correctly. We use the YOLOv5 (You Only Look Once) model, a real-time object detection model based on convolutional neural networks (<https://arxiv.org/pdf/2102.05402.pdf>), and incorporated a Python script to help label our data. We ran multiple iterations through YOLO to improve our model and labelling. Initially, we only had 2 labels: mask or no\_mask. However, this was ultimately problematic, as our model was unable to detect when someone wore their mask incorrectly (e.g. doesn't cover the nose). That's why, after multiple iterations, we added another label using a Python script. While that was our main change, our multiple iterations helped us balance our data and tune our hyperparameters, leading to greater accuracy. This backend is connected to a server and a database via Google Cloud's Firebase for the moment. *Front End:* The front end fetches its information from Google Cloud's Firebase and displays it in an interactive dashboard. The dashboard and its following pages were done using Material UI, a front-end library in React. Furthermore, a number of other libraries or tools were used to help sort the data and beautify the application, such as lodash, Iconify, Ant Design, faker, etc. ## Challenges we ran into We struggled with really understanding the YOLO model and convolutional networks before implementing it. We initially tried to implement parts of it without understanding it or torch, as this was our first time working with them. However, to actually improve our model, we really needed to understand the parts to change. We also struggled immensely with connecting the two parts of our project together. In the front end, we ran into multiple TypeErrors that had to do with states and usages of props. ## Accomplishments that we're proud of We're proud of how we worked as a team, leveraged our different specialities, and managed to create a working product together once we stitched the different parts of our project together. The computer vision and machine learning modules and libraries are state-of-the-art and very much used in current technologies. Our model is even able to handle very corner-case situations, such as when someone covers their face with a hand instead of a real mask, or when the mask is worn incorrectly. The dashboard also turned out to be simple, clean, and elegant, and reflected what we initially went for.
## What we learned ML models are easy to use yet hard to get right. Documentation and APIs are useful, but a large part of the work is understanding your data and what changes need to be made to improve your model. Data visualisation and metrics were very helpful for this part! Another great thing was the potential and opportunity that comes from pre-existing, labelled datasets. In our project, Roboflow and Kaggle proved to be hugely useful and saved us a lot of time. ## What's next for Cozy Koalas A mask recognition system can be used in multiple other fields. For instance, airports, hospitals, quarantine centers, malls, schools, and offices could reinforce their mask mandates without having to buy extra hardware. Software that can be used in conjunction with existing camera feeds would simplify this task, and the analytics provided could also help those organizations better plan their resources. On the technical side, a clear next step would be to incorporate face recognition into our model using the FaceNet library (<https://ieeexplore.ieee.org/document/9451684>). This would essentially map the face images it gets from the cameras' feed to identified individuals inside our deep convolutional network. Whether or not this feature will be used is up to the user's discretion; however, it would be necessary for our system to automatically send a text message to the right person. Other next steps include increasing accuracy through a more balanced dataset (SMOTE balancing can only do so much...), adding distance measurements, and providing more analytics in the dashboard.
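For reference, loading a custom-trained YOLOv5 checkpoint and counting the three mask classes per frame can be done roughly as below using the standard ultralytics/yolov5 torch.hub workflow. The weights file name and exact class strings are assumptions matching the labels described above:

```python
# Rough inference sketch for the three-class mask model described above.
# "mask_best.pt" and the class names are assumptions for illustration.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="mask_best.pt")

def count_mask_states(frame):
    results = model(frame)                       # frame: numpy image array
    names = [results.names[int(c)] for c in results.xyxy[0][:, 5]]
    return {
        "mask": names.count("mask"),
        "incorrect_mask": names.count("incorrect_mask"),
        "no_mask": names.count("no_mask"),
    }
```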
## DEMO WITHOUT PRESENTATION **This app would typically be running in a public space.** [demo without presentation (judges please watch the demo with the presentation)](https://youtu.be/qNmGr1GJNrE) ## Inspiration We spent **hours** thinking about what to create for our hackathon submission. Every idea that we had already existed. These first hours went by quickly and our hopes of finding an idea that we loved were dwindling. The idea that eventually became **CovidEye** started as an app that would run in the background of your phone and track the type and number of coughs throughout the day; however, we discovered a successful app that already does this. About an hour after this idea was pitched, **@Green-Robot-Dev-Studios (Nick)** pitched a variation of this app that would run on a security camera or on the web and track the coughs of people in stores (anonymously). A light bulb immediately lit over all of our heads, as this would help prevent COVID-19 outbreaks, collect data, and be accessible to everyone (it can run on your laptop as opposed to a security camera). ## What it does **CovidEye** tracks a tally of coughs and face touches live and graphs it for you. **CovidEye** allows you to pass in any video feed to monitor for COVID-19 symptoms within the area covered by the camera. The app monitors the feed for anyone that coughs or touches their face. **For demoing purposes, we are using a webcam, but this could easily be replaced with a security camera. Our logic can even handle multiple events by different people simultaneously.** ## How we built it We used a model called PoseNet built by TensorFlow. The data outputted by this model is passed through some clever detection logic. Also, this data can be passed on to the government as an indicator of where symptomatic people are going. We used Firebase as the backend to persist the tally count. We created a simple API to connect Firebase and our ReactJS frontend. ## Challenges we ran into * We spent about 3 hours connecting the AI count to Firebase and patching it into the React state. * Tweaking the pose detection logic took a lot of trial and error * Deploying a built React app (we had never done that before and had a lot of difficulty, resulting in the need to change code within our application) * Optimizing the AI garbage collection (Chrome would freeze) * Optimizing the graph (too much for Chrome to handle with the local AI) ## Accomplishments that I'm proud of * **All 3 of us** We are very proud that we thought of and built something that could really make a difference in this time of COVID-19, directly and with statistics. We are also proud that this app is accessible to everyone, as many small businesses are not able to afford security cameras. * **@Alex-Walsh (Alex)** I've never touched any form of AI/ML before, so this was a massive learning experience for me. I'm also proud to have competed in my first hackathon. * **@Green-Robot-Dev-Studios (Nick)** I'm very proud that we were able to create an AI as accurate as it is in the time frame * **@Khalid Filali (Khalid)** I'm proud to have pushed my ReactJS skills to the next level and competed in my first hackathon. ## What we learned * PoseNet * ChartJS * AI basics * ReactJS Hooks ## What's next for CovidEye * **Refining**: with a more extensive dataset, our accuracy would greatly increase * Solace PubSub: we didn't have enough time, but we wanted to create live notifications that would go to multiple people when there is excessive coughing.
* Individual tallies for each person instead of one overall tally (we didn't have enough time) * Accounts (we didn't have enough time)
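The "clever detection logic" CovidEye describes boils down to simple geometry on PoseNet keypoints; the app itself runs PoseNet in the browser with TensorFlow.js, so this Python sketch of a wrist-near-face check is only a language-neutral illustration with invented thresholds:

```python
# Pseudological sketch of a face-touch check: flag the event when a wrist
# keypoint comes close to the nose keypoint. Thresholds are assumptions;
# "nose", "leftWrist", "rightWrist" are standard PoseNet part names.
import math

def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def is_face_touch(keypoints: dict, min_score: float = 0.5,
                  threshold_px: float = 60.0) -> bool:
    nose = keypoints["nose"]
    for wrist_name in ("leftWrist", "rightWrist"):
        wrist = keypoints[wrist_name]
        if (nose["score"] >= min_score and wrist["score"] >= min_score
                and distance(nose, wrist) < threshold_px):
            return True
    return False
```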
## Inspiration The ongoing effects of climate change and the theme of nature preservation motivated us to think about how we can promote sustainability on campus. A lot of initiatives have been taken by companies, people, and universities to tackle this and promote a sustainable lifestyle, but they haven't been very impactful. They either come as a one-time million-dollar investment or a set of guidelines without context or implementation. We came up with the concept of an app that allows students to participate in environmentally sustainable activities in their everyday life and rewards them for doing so. We believe that sustainability should not be a one-time investment but more of an everyday practice, and this is what we aim to achieve with SustainU. ## What it does *Check out the entire documentation on the* [GitHub Repo](https://github.com/kritgrover/htv-sustainu/blob/main/README.md)! SustainU is essentially a mobile-app-based rewarding system that motivates students and even faculty members to make contributions to the environment while benefiting from the app in terms of points. Students can do certain tasks shown on the app and then gain points. The points they receive can be redeemed for discounts in campus shops or for donating money to charities. ## How we built it We built it from scratch using **Figma, JavaScript, HTML** and **CSS**. After a brief session of brainstorming, we decided to focus on the theme Nature. From there we started researching and discussing all the different niches for which we could possibly create an effective solution. Building upon that, we came up with the idea for SustainU and went ahead with it because of how feasible yet impactful the concept can actually be. It requires a very minimal start-up cost and little to no maintenance. The prototype of the app was built from scratch on Figma. Our team spent hours working on the designs, colors, fonts, and the overall UI experience. Although not yet fully polished, this prototype clearly demonstrates the workflow and UI of the app, while taking all the core concepts of design into consideration. The website is made to briefly describe our app and what it does, while providing a slideshow showing the basic working of the app. This website was built using HTML, CSS, and a bit of JavaScript, and is really just for showing the world what SustainU is all about. We tackled a lot of issues related to the layout and bugs during the development process, all of which were solved using research, discussions, and critical thinking. ## Challenges we ran into Most of the challenges we ran into were very specific. For instance, how we should visualize the streak feature in the app, whether we should implement an achievement system or not, and buggy JavaScript code. We solved most of the problems through reasoning and technical knowledge, and came up with logical and effective solutions. Another challenge we ran into was choosing how to build the app itself, as everyone on our team had different experiences and preferences, but we managed to implement and showcase our project with a bit of compromise and a lot of hard work. ## Accomplishments that we're proud of It is a thrilling fact that we built a design prototype of an app that could make a difference and that can be showcased to everyone. It is not too much of an achievement, but we know how to get things started. We now realize that it is much easier for us to build an app from the ground up and put it into practice.
Also, working collaboratively as a whole team and having different small tasks for each person from day to night was a great learning experience. Facing challenges head-on with tight deadlines made us think more rationally, and everyone on the team has learnt something new. Everyone on the team gained valuable hands-on experience and knowledge of concepts they weren't already familiar with. ## What we learned So far, we learned how to form an idea when given a general topic. We've learnt how to work as a team and utilize each person's skills to the fullest. We've also gained insight into the critical steps required for building an app. Moreover, we've learnt a lot about clean and artistic design. All of the skills we acquired are priceless and worth cherishing. With regards to the web application, we learnt a lot about crucial development concepts like typography, laying out the structure using HTML, designing using CSS, and adding functionality using JavaScript. ## What's next for SustainU App Ideally, we will launch our app for the students at UofT. If feasible, this can be scaled to the public and incorporated into everyday life, giving people a platform and an incentive to be more sustainable. We at SustainU wish to launch our app to the public and promote environmental sustainability with users across the GTA. An ambitious idea such as this would require cooperation and partnerships with both the private and public sectors. For example, a proposed idea is to have points gained through frequent use of public transport. In order to keep track of when a user uses a public transit service, we'd have to partner up with PRESTO to link the transactions with the points system of SustainU. Another proposed idea is to partner with municipal governments to allow points to be gained through the use of Bike Share. Furthermore, environmentally sustainable small businesses that wish to increase their brand exposure can partner with SustainU and provide offers and discounts on their products. Through SustainU, individuals can finally be rewarded for being green.
losing
## Inspiration Have you ever been lying in bed, simply unable to fall asleep? Statistics show that listening to soothing music while falling asleep improves sleep quality because the music helps release stress and anxiety accumulated during the day. However, do you really want to get off your cozy little bed to turn off the music, or do you want to keep it playing for the entire night? Neither sounds good, right? ## What it does We designed a sleep helper: SleepyHead! This product would primarily function as a music player, but with a key difference: it would be designed specifically to help users fall asleep. SleepyHead connects with an audio device that plays a selection of soft music. It is connected to three sensors that detect acceleration, sound, and light in the environment. Once SleepyHead detects that the user has fallen asleep, it will tell the audio device to go into sleeping mode so that it won't disrupt the user's sleep. ## How we built it The following are the main components of our product: Sensors: We implement sensors in SleepyHead: an accelerometer to detect movement of the user and a sound detector to detect sound made by the user. Timer: runs a 20-minute loop. The data collected by the sensors is evaluated every 20 minutes. If no activity is detected, SleepyHead tells the audio device to enter sleeping mode. If activity is detected, SleepyHead will start another 20-minute loop. Microcontroller board: We use an Arduino Uno as the processor. It is the motherboard of SleepyHead; it connects all the above-mentioned elements and processes the algorithms. Audio device: connected to SleepyHead via Bluetooth. ## Challenges we ran into We had this idea for SleepyHead: a sleep speaker that could play soft music to help people fall asleep, and even detect when they were sleeping deeply to turn off the music automatically. But when we started working on the project, we realized our team didn't have all the electrical equipment and kits we needed to build it. Unfortunately, some of the supplies we had were too old to use, and some of them weren't working properly. It was frustrating to deal with these obstacles, but we didn't want to give up on our idea. As novices in wiring real hardware and in C++, we struggled to connect all the wires and jumpers to the circuit board, and figuring out how to use C++ to control all the kits properly and make sure they functioned well was challenging. However, our crew didn't let the difficulties discourage us. With some help and lots of effort, we eventually overcame all the challenges and made SleepyHead a reality. Now, it feels amazing to see people using it to improve their sleep and overall health. ## Accomplishments that we're proud of One of the most outstanding aspects of SleepyHead is that it's able to use an accelerometer and a sound detector to detect user activity, and therefore decide whether the user has fallen asleep based on the data. Also, coupled with its ability to display the time and activate a soft LED light, it's a stylish and functional addition to any bedroom. These features, combined with its ability to promote healthy sleep habits, make SleepyHead a truly outstanding and innovative product. ## What we learned Overall, there are many things that we have learned through this hackathon. First, we learned how to discuss ideas and thoughts with each team member more effectively.
In addition, we learned how to put our knowledge into practice and create an actual, real-life product. Finally, through the hackathon we came to understand that good innovation can be really useful and essential, and that choosing the right direction can save you tons of time. ## What's next for SleepyHead SleepyHead's next goals are: Gather more feedback from users. This feedback can help us determine which functions are feasible and which are not, and can provide insights on what changes or improvements need to be made in SleepyHead. Conduct research. Conducting research can help us identify trends and gaps in our potential future market. Iterate and test. Once we have a prototype of the product, it is important to iterate and test it to see how it performs in the real world. Stay up to date with industry trends. It's important for us to stay on top of industry trends and emerging technologies, as this can provide SleepyHead with new ideas and insights to improve the product.
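The 20-minute detection loop SleepyHead describes is simple enough to sketch. The real device runs on an Arduino Uno in C++, so this Python version, with made-up thresholds and placeholder sensor/audio callbacks, is only meant to show the control flow:

```python
# Control-flow sketch of SleepyHead's loop: sample the accelerometer and sound
# sensor for a 20-minute window; if no activity was seen, tell the audio device
# to enter sleeping mode, otherwise start another window. All thresholds and
# the sensor/audio interfaces are placeholder assumptions.
import time

WINDOW_S = 20 * 60
MOTION_THRESHOLD = 0.2     # g, assumed
SOUND_THRESHOLD = 40       # arbitrary sensor units, assumed

def run(read_motion, read_sound, audio_sleep_mode):
    while True:
        activity = False
        start = time.time()
        while time.time() - start < WINDOW_S:
            if read_motion() > MOTION_THRESHOLD or read_sound() > SOUND_THRESHOLD:
                activity = True
            time.sleep(1)
        if not activity:
            audio_sleep_mode()     # user is asleep: fade the music out
            return
```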
## Inspiration We were inspired to do this project after the first night of our first hackathon, when we slept on the floor. The second day we were trying to think about what our project should focus on, but we could not concentrate due to our aching backs, and that was when the idea came to mind. We figured that if people had a simple and quiet way of seeing the quality of their sleep during the period when they are still awake in bed, then maybe they could readjust the way they lie down to distribute their weight evenly on the mattress and ensure better quality of sleep, all while monitoring their heart rate to further gauge how well they are sleeping at night. ## What it does Our project uses force sensors, which would ideally lie beneath the upper surface of a mattress, to register the amount of force a person exerts over certain areas of their bed. This information is then communicated to the individual using yellow LED lights, which not only block out blue light (which disturbs sleep cycles) but also indicate where the person is distributing their weight. Our model uses two force sensors, one near where an average person's upper back would be located and one where an average person's lower back (or upper thighs) would be located. These sensors then light up the LEDs when a certain amount of pressure is applied to them. Furthermore, the yellow LEDs simulate soft ambient light, lighting up in the areas that correspond to the force sensors and allowing an individual to see where their body is applying the most force. If the two areas with the LEDs are equal in brightness, then the individual can see that their sleeping posture is optimal, as pressure is being applied to each sensor evenly. Beneath the first force sensor, there is a pulse sensor which also aims to read the individual's heart rate. This is to gather more data on the quality of their sleep, as heart rates tend to drop when an individual is at rest, usually to between 50-90 bpm. Hence, if an individual is restless while sleeping, the pulse sensor will gather that data and will also reflect it by lighting up one small, red LED. This light is not meant to wake the individual, but rather to provide them with visual information in case they are awake. ## How we built it Our model used two Arduino boards to execute the code. Each Arduino board had its own force sensor and its own set of yellow LED lights. Each Arduino also had a breadboard, which was used to wire the LEDs and force sensors to the Arduino. One of the Arduino boards, however, included the code for the pulse sensor as well as the red LED. A rectangular plastic container was used as the "bed", with the force sensors and the pulse sensor placed above it. The entire model is powered using a laptop. ## What we learned Much can be done with a microcontroller and a few LEDs and sensors in 3 days. ## What's next for Quality Sleep Although this project was for our hackathon, we still thought about many applications for it. We envision our project inspiring research into sleeping habits to improve people's health. Not as much research is done on sleep habits as on other aspects of human health, so we hoped to delve into it as best as possible. We looked into the effects of sleep posture on the quality of sleep, since bad posture could lead to a myriad of pain-related symptoms. Sleep posture is not something most people think about before they go to sleep, but it is something that we wanted to help people understand in a subtle way: lights.
We also chose colors such as yellow and red because they do not emit blue light, i.e., visible light with wavelengths of roughly 450-495 nm. This also prevents interruptions to an individual's sleep cycle, as blue light reduces the release of melatonin, which is necessary to regulate sleep cycles. All of this is what our project hopes to inspire people to delve into, as we think sleep habits are among the hardest habits to get the hang of.
## Inspiration Decentralizing productivity with flow is the key to your success! Have you ever found yourself in the paradoxical situation where you're too tired to continue working but also too stressed about deadlines to sleep? We've discovered the solution you never knew you needed (and probably still don't): Red Handed. Inspired by the countless souls who've fallen victim to their own procrastination and the relentless pursuit of productivity, we decided to tackle the problem head-on, with a slap. After all, why rely on caffeine or traditional means of staying awake when you can have a machine physically prompt you to stay alert? In the grand tradition of inventing solutions to problems you didn't know existed, Red Handed stands proud, ready to slap the sleepiness right out of you. It's the wake-up call you never asked for but might just appreciate in your most desperate, caffeine-deprived moments. ## What it does Red Handed uses a standard webcam to monitor your facial expressions in real-time. With the power of MediaPipe, an open-source machine learning framework for building multimodal (audio, video, etc.) applied machine learning pipelines, it detects signs of sleepiness or inattentiveness, such as yawning or eye closure. Here's the step-by-step process: **Facial Detection**: The webcam captures live video input, focusing on the user's face. **Yawn and Eye Shape Analysis**: Using MediaPipe, Red Handed analyzes the shape of the eyes and the mouth. We also process the position of the face for the *optimal* slap. A significant change in these shapes, such as a yawn or the eyes closing, triggers the next step. **Processing**: The analysis is sent to a Qualcomm HDK (Hardware Development Kit) via a Socket.io socket, where the decision logic resides. This powerful Android kit processes the input in milliseconds and sends the appropriate Bluetooth signal to the robotic slapper. **Action - The Slap**: The real-time feedback loop ensures that the robotic slapper is in the perfect position to deliver the *optimal* slap. Upon detecting sleepiness, the Qualcomm HDK sends a Bluetooth signal to the robotic slapper, delivering a harsh, safe slap to the user. This act is designed to be more of a punishment than a nudge, a physical reminder to stay awake and focused. **Reward**: If you stay awake for the entirety of the Pomodoro timer, you get free Flow tokens! ## How we built it The Pomodoro app on the Qualcomm HDK communicates with the webcam via a Socket.io socket to start analyzing. MediaPipe builds a facial skeleton on the user's face and sends relevant points back to the Qualcomm HDK via the socket for analysis. The Pomodoro app analyzes the data and sends the y-position of the user's face to the ESP module via Bluetooth, which passes the data to the Arduino slapper. The Pomodoro app also processes the user's face to detect sleepiness and sends the ESP module a "slap" signal to release the red hand. ## Challenges we ran into It was very difficult to build the data pipelines. The Bluetooth took forever to set up, while the eduroam firewall would not allow ngrok to process certain requests. ## What's next for Red-Handed In our next move, Red Handed is leveling up from a cheeky wake-up call to the ultimate sidekick in the battle against the snooze. Picture this: Red Handed evolves into your personal hype machine, ready to transform every yawn into a "heck no, let's go!" moment.
With its sights set on global domination, we're not just stopping at keeping you awake; we're on a mission to make grogginess fear its name. Imagine a world where Red Handed sets off a ripple of energy, not just in your home office but across continents, making the mid-afternoon slump a thing of the past. We're talking about a future where staying awake isn't just necessary; it's epic, and Red Handed is leading the charge, one exhilarating slap at a time. ## Linktree <https://linktr.ee/puravg>
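On the camera side of Red Handed, a minimal sketch of the sleepiness check with MediaPipe Face Mesh could look like the following. The landmark indices are the commonly used Face Mesh points for one eye and the mouth, but the ratio thresholds are assumptions, and the Socket.IO plumbing that forwards results to the Qualcomm HDK is omitted:

```python
# Sketch of an eye-closure / yawn check with MediaPipe Face Mesh. Thresholds
# are placeholders; forwarding the "slap" decision over the socket is left out.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh()

def ratios(frame):
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    lm = result.multi_face_landmarks[0].landmark
    # Eye openness: lid gap over eye width; mouth openness: lip gap over mouth width.
    eye_open = abs(lm[159].y - lm[145].y) / (abs(lm[33].x - lm[133].x) + 1e-6)
    mouth_open = abs(lm[13].y - lm[14].y) / (abs(lm[78].x - lm[308].x) + 1e-6)
    return eye_open, mouth_open

def looks_sleepy(frame, eye_thresh=0.18, yawn_thresh=0.6):
    r = ratios(frame)
    return r is not None and (r[0] < eye_thresh or r[1] > yawn_thresh)
```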
losing
## What it does SPOT would be equipped with various sensors, cameras, and LIDAR technology to perform inspections in hazardous environments. A network of SPOT bots will be deployed within a 2.5 – 3-mile radius surrounding a particular piece of infrastructure for security and surveillance tasks, patrolling areas and providing real-time video feeds to human operators. It can be used to monitor large facilities, industrial sites, or public events, enhancing security efforts. This network of SPOT robots will be used to inspect and collect data/images for analysis, track suspects, and gather crucial intelligence in high-risk environments, thus maintaining situational awareness without putting officers in harm's way. They will provide real-time video feeds. If it detects any malicious activity, the SPOT will act as the first responder and deploy non-lethal measures by sending a distress signal to the closest law enforcement officer/authority, who'd be able to mitigate the situation effectively. Consequently, the other SPOT bots in the network would also be alerted. Its ability to provide real-time situational awareness without putting officers at risk is a significant advantage. ## How we built it * Together.ai: Used Llama to enable conversations and consensus among agents * MindsDB: The database is stored in Postgres (Render) and imported into MindsDB. The sentiment classifier is trained with the help of demo data, and the sentiments retrieved from every agent allow us to understand the mental state of every bot * Reflex: UI for visualization of statistical measures of the bots * Intel: To train MobileNet for classifying threats * InterSystems: To carry out battery-life forecasting for the agent to enable efficient decisions
## Inspiration With the rise of IoT devices and the backbone support of the emerging 5G technology, BVLOS drone flights are becoming more readily available. According to CBInsights, Gartner, and IBISWorld, this US$3.34B market has the potential for growth and innovation. ## What it does **Reconnaissance drone software that utilizes custom object recognition and machine learning to track wanted targets.** It performs at close to real-time speed with nearly 100% accuracy and allows a single operator to operate many drones at once. Bundled with a light, sleekly designed web interface, it is inexpensive to maintain and easy to operate. **There is a Snapdragon Dragonboard that runs physically on the drones, capturing real-time data and processing the video feed to identify targets. Identified targets are tagged and sent to an operator that is operating several drones at a time. This information can then be relayed to the appropriate parties.** ## How I built it There is a Snapdragon Dragonboard that runs physically on the drones, capturing real-time data and processing the video feed to identify targets. This runs on a Python script that then sends the information to a backend server built using NodeJS (coincidentally also running on the Dragonboard for the demo) to do processing and to use Microsoft Azure to identify the potential targets. Operators use a frontend to access this information. ## Challenges I ran into Determining a way to reliably demonstrate this project became a challenge, considering neither the drone nor the GPS is actually moving during the demonstration. The solution was to feed the program a video feed with simulated moving GPS coordinates so that the system believes it is moving in the air. Training the model also required us to devote multiple engineers to it for most of the hackathon. ## Accomplishments that I'm proud of The code flow is adaptable to virtually an infinite number of scenarios with virtually **no hardcoding for the demo**, except feeding it the video and GPS coordinates rather than the camera feed and actual GPS coordinates. ## What I learned We learned a great amount about computer vision and building/training custom classification models. We used Node.js, which is a highly versatile environment and can be configured to relay information very efficiently. Also, we learned a few JavaScript tricks and some pitfalls to avoid. ## What's next for Recognaissance Improving the classification model using more expansive datasets. Enhancing the software to be able to distinguish several objects at once, allowing for more versatility.
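As a rough illustration of the demo setup described above (a recorded video plus simulated GPS so the system believes the drone is flying), the sketch below walks fake coordinates along a path while frames are read and detection reports are posted to a backend. The endpoint, payload fields, and drift rate are assumptions, not the team's actual interface.

```python
# Sketch of the demo harness: recorded video + simulated GPS -> backend POST.
# The endpoint, payload shape and drift rate are illustrative assumptions.
import time
import cv2
import requests

BACKEND = "http://localhost:3000/api/detections"   # assumed Node/Express endpoint
lat, lon = 45.3876, -75.6960                       # assumed starting coordinates

cap = cv2.VideoCapture("demo_flight.mp4")          # recorded feed instead of the camera
frame_idx = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Pretend the drone is moving: nudge the coordinates every frame.
    lat += 0.00001
    lon += 0.00002
    frame_idx += 1

    # In the real pipeline this is where the frame would be classified
    # (on-device or via Azure); here we just report a placeholder.
    detection = {"frame": frame_idx, "lat": lat, "lon": lon, "target": None}
    try:
        requests.post(BACKEND, json=detection, timeout=1)
    except requests.RequestException:
        pass                                       # demo keeps running if the server is down
    time.sleep(1 / 30)                             # roughly real-time playback
```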
## Inspiration Public Safety and Crime Surveillance ## What it does Assists Police Officers in capturing Real-Time data based on Public Crime Demographics and Computer Vision - keeping in mind **Public Safety** and police protection. The camera triggers as soon as it senses danger based on sound, vision, and local criminal history. Additionally, it keeps an eye on the surroundings to give Police Officers an edge at crime scenes and in collecting evidence. ## How we built it As a team, we each took responsibility for individual tasks, connecting the pieces together in the end. We trained separate Deep Learning Models for **Real-Time Disturbance Audio Detection and Computer Vision to detect a Danger Index**. We used **Public Data of Crime Demographics** to provide additional confidence to the decisions of both Models. We integrated the application on **Swift** and traversed data through **Firebase** between different platforms. ## Challenges we ran into [1] Unavailability of a preprocessed labelled dataset or model. [2] Integrating Python scripts with Swift to traverse data in Real-Time between iOS and the computer. [3] Coming up with the coolest name for our hack and domain. ## Accomplishments that we're proud of [1] Building a community-focused hack to solve a significant problem in public safety and crime surveillance. [2] Being able to execute and deploy a full-fledged system of programs on an app in a short period of time. ## What we learned We learned to integrate Deep Learning models with real-life applications. We learned iOS development and the use of the Firebase platform. We learned to perform Data Augmentation on public datasets and derive mathematical formulae for calculations that analyzed predictions. We learned to have FUN!!! ## What's next for Copture We plan to proceed with the intended application of the project and improve the existing algorithm to cover a wide variety of use-cases.
partial
## Inspiration We were inspired by a [recent article](https://www.cbc.ca/news/canada/manitoba/manitoba-man-heart-stops-toronto-airport-1.5430605) that we saw on the news, where there was a man who suffered a cardiac arrest while waiting for his plane. With the help of a bystander who was able to administer the AED and the CPR, he was able to make a full recovery. We wanted to build a solution that is able to connect victims of cardiac arrests with bystanders who are willing to help, thereby [increasing their survival rates](https://www.ahajournals.org/doi/10.1161/CIRCOUTCOMES.109.889576). We truly believe in the goodness and willingness of people to help. ## Problem Space We wanted to be laser-focused on the problem that we are solving - helping victims of cardiac arrests. We did tons of research to validate that this was a problem to begin with, before diving deeper into the solutioning space. We also found that there are laws protecting those who try to offer help - indemnifying them of liability while performing CPR or using an AED: [Good Samaritan and the Chase McEachern Act](https://www.toronto.ca/community-people/public-safety-alerts/training-first-aid-courses/). So why not ask everyone to help? ## What it does Hero is a web- and app-based platform that empowers community members to assist in time-sensitive medical emergencies, especially cardiac arrests, by providing them with an ML-optimised route that maximizes the CA victim's chances of survival. We have 2 components - Hero Command and Hero Deploy. 1) **Hero Command** is the interface that the EMS uses. It allows the location of cardiac arrests to be shown on a single map, as well as the nearby first-responders and AED equipment. We scraped the Ontario Government's AED listing to provide an accurate geo-location of an AED for each area. Hero Command has an **ML Model** working in the background to find out the optimal route that the first-responder should take: should they go straight to the victim and perform CPR, or should they detour and collect the AED before proceeding to the victim (which will take some time). This is done by training our model on a sample dataset and calculating an estimated survival percentage for each of the two routes. 2) **Hero Deploy** is the mobile application that our community of first-responders uses. It will allow them to accept/reject the request, and provide the location and navigation instructions. It will also provide hands-free CPR audio guidance so that the community members can focus on CPR. \* Cue the Staying Alive music by the Bee Gees \* ## How we built it With so much passion, hard work and an awesome team. And honestly, YouTube tutorials. ## Challenges I ran into We **did not know how** to create an app - all of us were either web devs or data analysts. This meant that we had to watch a lot of tutorials and read articles to get up to speed. We initially considered abandoning this idea because of the inability to create an app, but we are so happy that we managed to do it together. ## Accomplishments that I'm proud of Our team learnt so many things in the past few days, especially tech stacks and concepts that were super unfamiliar to us. We are glad to have created something that is viable, working, and has the potential to change how the world works and lives. We built 3 things - an ML Model, a Web Interface, and a Mobile Application. ## What I learned Hard work takes you far. We also learnt React Native, and how to train and use supervised machine learning models (which we did not have any experience in).
We also worked on business and market validation to make sure that the project we are building actually solves a real problem. ## What's next for Hero Possibly introducing the idea to Government Services and getting their buy-in. We may also explore other use cases for Hero.
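To make the route decision Hero Command describes concrete, here is a toy version of the choice: estimate survival for "go straight and start CPR" versus "detour for the AED first", and pick the higher one. The decay rates, ceilings, and decision rule are illustrative assumptions, not the team's trained model.

```python
# Toy survival comparison for the two candidate routes.
# Decay rates and ceilings are illustrative assumptions, not Hero's trained model.

def survival_direct(t_to_victim_min: float) -> float:
    """No intervention until arrival, then CPR alone caps the outcome."""
    base = 1.0 - 0.10 * t_to_victim_min          # ~10%/min drop with no help (assumed)
    return max(base, 0.0) * 0.45                 # CPR-only ceiling (assumed)

def survival_via_aed(t_to_aed_min: float, t_aed_to_victim_min: float) -> float:
    """Longer with no intervention, but defibrillation gives a higher ceiling."""
    delay = t_to_aed_min + t_aed_to_victim_min
    base = 1.0 - 0.10 * delay
    return max(base, 0.0) * 0.75                 # CPR + AED ceiling (assumed)

def choose_route(t_direct, t_to_aed, t_aed_to_victim):
    direct = survival_direct(t_direct)
    with_aed = survival_via_aed(t_to_aed, t_aed_to_victim)
    return ("grab AED first", with_aed) if with_aed > direct else ("go direct, start CPR", direct)

print(choose_route(t_direct=3.0, t_to_aed=2.0, t_aed_to_victim=2.5))
# -> ('grab AED first', 0.4125) under these assumed numbers
```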
## Inspiration This project was a response to the events that occurred during Hurricane Harvey in Houston last year, wildfires in California, and the events that occurred during the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We are also informing people of the risk factors in certain areas so that they will be more knowledgeable when making decisions for travel, their futures, and taking preventative measures. ## What it does Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on react-native-map API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics. Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previous resolved incidents of all types, we can generate a risk factor that provides a way to gauge how safe the range a user is in based off the most dangerous range within our database. ## How we built it We used react-native, MongoDB, Javascript, NodeJS, and the Google Cloud Platform, and various open source libraries to help build our hack. ## Challenges we ran into Ejecting react-native from Expo took a very long time and prevented one of the members in our group who was working on the client-side of our app from working. This led to us having a lot more work for us to divide amongst ourselves once it finally ejected. Getting acquainted with react-native in general was difficult. It was fairly new to all of us and some of the libraries we used did not have documentation, which required us to learn from their source code. ## Accomplishments that we're proud of Implementing the Heat Map analytics feature was something we are happy we were able to do because it is a nice way of presenting the information regarding disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Generating successful scripts to test our app and debug any issues was also something we were proud of and that helped us get past many challenges. ## What we learned We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times, they have glaring drawbacks and limitations which may make another, more 'complicated' framework, a better choice in the long run. 
## What's next for Supermaritan In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends over time and displaying risk factors for each individual incident type are things we definitely plan to do in the future.
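A small sketch of the risk-factor idea described above: filter resolved incidents to a latitude/longitude window, take a severity-weighted average, and normalize against the most dangerous window in the database. Field names, the window size, and the weighting are assumptions for illustration.

```python
# Sketch of the area risk factor: severity-weighted average of resolved incidents
# inside a lat/lon window, normalized by the worst window seen so far.
# Field names and window size are illustrative assumptions.

def incidents_in_range(incidents, lat, lon, window=0.05):
    return [i for i in incidents
            if abs(i["lat"] - lat) <= window and abs(i["lon"] - lon) <= window
            and i["resolved"]]

def weighted_severity(local):
    if not local:
        return 0.0
    # Severity 1-5; heavier incidents dominate the average.
    total_weight = sum(i["severity"] for i in local)
    return sum(i["severity"] ** 2 for i in local) / total_weight

def risk_factor(incidents, lat, lon, worst_range_score):
    """Returns 0-10, scaled against the most dangerous range in the database."""
    score = weighted_severity(incidents_in_range(incidents, lat, lon))
    return round(10 * score / worst_range_score, 1) if worst_range_score else 0.0

def most_common_type(incidents, lat, lon):
    local = incidents_in_range(incidents, lat, lon)
    types = [i["type"] for i in local]
    return max(set(types), key=types.count) if types else None
```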
# **MedKnight** #### Professional medical care in seconds, when the seconds matter ## Inspiration Natural disasters often put emergency medical responders (EMTs, paramedics, combat medics, etc.) in positions where they must assume responsibilities beyond the scope of their day-to-day job. Inspired by this reality, we created MedKnight, an AR solution designed to empower first responders. By leveraging cutting-edge computer vision and AR technology, MedKnight bridges the gap in medical expertise, providing first responders with life-saving guidance when every second counts. ## What it does MedKnight helps first responders perform critical, time-sensitive medical procedures on the scene by offering personalized, step-by-step assistance. The system ensures that even "out-of-scope" operations can be executed with greater confidence. MedKnight also integrates safety protocols to warn users if they deviate from the correct procedure and includes a streamlined dashboard that streams the responder’s field of view (FOV) to offsite medical professionals for additional support and oversight. ## How we built it We built MedKnight using a combination of AR and AI technologies to create a seamless, real-time assistant: * **Meta Quest 3**: Provides live video feed from the first responder’s FOV using a Meta SDK within Unity for an integrated environment. * **OpenAI (GPT models)**: Handles real-time response generation, offering dynamic, contextual assistance throughout procedures. * **Dall-E**: Generates visual references and instructions to guide first responders through complex tasks. * **Deepgram**: Enables speech-to-text and text-to-speech conversion, creating an emotional and human-like interaction with the user during critical moments. * **Fetch.ai**: Manages our system with LLM-based agents, facilitating task automation and improving system performance through iterative feedback. * **Flask (Python)**: Manages the backend, connecting all systems with a custom-built API. * **SingleStore**: Powers our database for efficient and scalable data storage. ## SingleStore We used SingleStore as our database solution for efficient storage and retrieval of critical information. It allowed us to store chat logs between the user and the assistant, as well as performance logs that analyzed the user’s actions and determined whether they were about to deviate from the medical procedure. This data was then used to render the medical dashboard, providing real-time insights, and for internal API logic to ensure smooth interactions within our system. ## Fetch.ai Fetch.ai provided the framework that powered the agents driving our entire system design. With Fetch.ai, we developed an agent capable of dynamically responding to any situation the user presented. Their technology allowed us to easily integrate robust endpoints and REST APIs for seamless server interaction. One of the most valuable aspects of Fetch.ai was its ability to let us create and test performance-driven agents. We built two types of agents: one that automatically followed the entire procedure and another that responded based on manual input from the user. The flexibility of Fetch.ai’s framework enabled us to continuously refine and improve our agents with ease. ## Deepgram Deepgram gave us powerful, easy-to-use functionality for both text-to-speech and speech-to-text conversion. Their API was extremely user-friendly, and we were even able to integrate the speech-to-text feature directly into our Unity application. 
It was a smooth and efficient experience, allowing us to incorporate new, cutting-edge speech technologies that enhanced user interaction and made the process more intuitive. ## Challenges we ran into One major challenge was the limitation on accessing AR video streams from Meta devices due to privacy restrictions. To work around this, we used an external phone camera attached to the headset to capture the field of view. We also encountered microphone rendering issues, where data could be picked up in sandbox modes but not in the actual Virtual Development Environment, leading us to scale back our Meta integration. Additionally, managing REST API endpoints within Fetch.ai posed difficulties that we overcame through testing, and configuring SingleStore's firewall settings was tricky but eventually resolved. Despite these obstacles, we showcased our solutions as proof of concept. ## Accomplishments that we're proud of We're proud of integrating multiple technologies into a cohesive solution that can genuinely assist first responders in life-or-death situations. Our use of cutting-edge AR, AI, and speech technologies allows MedKnight to provide real-time support while maintaining accuracy and safety. Successfully creating a prototype despite the hardware and API challenges was a significant achievement for the team, and it was a grind till the last minute. We are also proud of developing an AR product, as our team had never worked with AR/VR before. ## What we learned Throughout this project, we learned how to efficiently combine multiple AI and AR technologies into a single, scalable solution. We also gained valuable insights into handling privacy restrictions and hardware limitations. Additionally, we learned about the importance of testing and refining agent-based systems using Fetch.ai to create robust and responsive automation. Our greatest takeaway, however, was how to manage such a robust backend with a lot of internal API calls. ## What's next for MedKnight Our next step is to expand MedKnight's VR environment to include detailed 3D renderings of procedures, allowing users to actively visualize each step. We also plan to extend MedKnight's capabilities to cover more medical applications and eventually explore other domains, such as cooking or automotive repair, where real-time procedural guidance can be similarly impactful.
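As a sketch of the backend pattern described above (Flask routes writing performance logs that the dashboard reads), the snippet below records a reported procedure step and flags a deviation when it doesn't match the expected next step. SingleStore speaks the MySQL wire protocol, so a standard MySQL client is used here; the table, column names, procedure list, and deviation rule are assumptions, not MedKnight's actual schema.

```python
# Sketch: Flask endpoint logging procedure steps into SingleStore and flagging deviations.
# Table/column names, host, and the deviation rule are illustrative assumptions.
from flask import Flask, request, jsonify
import pymysql  # SingleStore is MySQL wire-compatible

app = Flask(__name__)
PROCEDURE = ["apply pressure", "clean wound", "pack wound", "apply tourniquet"]  # assumed

def db():
    return pymysql.connect(host="svc-singlestore.example.com",  # assumed host
                           user="admin", password="...", database="medknight")

@app.route("/api/performance", methods=["POST"])
def log_step():
    data = request.get_json()
    session_id = data["session_id"]
    reported = data["step"]                      # step the responder says they are on
    expected_idx = int(data["expected_index"])   # tracked by the agent
    deviated = reported != PROCEDURE[expected_idx]

    conn = db()
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO performance_logs (session_id, step, deviated) VALUES (%s, %s, %s)",
            (session_id, reported, deviated),
        )
    conn.commit()
    conn.close()
    return jsonify({"deviated": deviated,
                    "warning": "Check the procedure order" if deviated else None})
```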
winning
## Inspiration Have you or a family member/friend ever been in an automotive accident? Regardless of fault, it's a bad time for everyone involved, and we would all like to put that incident behind us and be done with it. However, then comes dealing with insurance and getting the damages assessed, and that takes a while, whether it be waiting for calls or emails, so the unfortunate event continues to linger in your lives. As someone who has experienced the trauma of a car accident firsthand, and had a lengthy insurance process drag on, I wanted to create something to directly address a lived problem. ## What it does autoassist enables users to upload a photo, which is stored in CockroachDB and submitted to Google Cloud Vision to be processed. We trained our own machine learning models to detect surface flaws (dents) in images of cars. ## How we built it We used React to scaffold the frontend of this application. The backend API, which connects CockroachDB and Google Cloud Vision AutoML, was written with Node/Express. Our ML/AI capabilities leverage a model we trained ourselves. ## Challenges we ran into Deployment was time-consuming due to the restriction of only one Google Cloud App Engine being allowed within a particular Project. We solved this problem by deploying our front end through AWS. ## Accomplishments that we're proud of We completed and deployed our MVP from end to end! ## What we learned We would not recommend learning React within a day, but it's quite heartening how quickly we were able to pick up new frameworks and technologies! We were super impressed by the accessibility of GCP AI platforms, though we learned the importance of a pre-labelled training dataset, as we spent several hours labelling training images by hand. ## What's next for autoassist We plan to continue working on this project; features we hope to add in the coming weeks are user authentication, complex (multi-source) damage detection, and auto-connection to an insurance agent through text or social channels.
## Inspiration As students, we realized that insurance is a topic that is often seen as unapproachable and difficult to learn about. Getting insurance and finding the right one for a certain situation can seem scary and hard to understand, so we wanted to build a platform where a user could punch in the appropriate information and find the most suitable options for them. ## What it does **1.** Glass Wings creates an equal platform where anyone can share information about what kind of insurance they have bought or encountered based on the environmental factors of their property. **2.** Glass Wings can predict the type and cost of insurance a user can expect based on the property they are trying to buy. **3.** Glass Wings uses real-time data from actual users, raising awareness about insurance for individuals while simultaneously calculating insurance quickly and easily. ## How we built it We built this platform with Python Django, then utilised AWS in order to train our model to predict the right insurance based on our crowd-sourced data. Not only is this trustworthy because it is based on real-time, user-verified data, but an individual can get a sense of how much everyone else is paying so that they don't feel they are being ripped off by a company. ## Challenges we ran into AWS SageMaker and ML are not easy topics to learn overnight. Using new technologies and a new concept was a huge learning curve, which made it a challenge for us to build the product we envisioned. ## Accomplishments that we're proud of We are tackling real-life issues. The environment is a hot topic right now because more and more people are becoming aware of climate change and the circumstances we are living in, and I believe that we are hopping on the right trends and tackling the appropriate issues. ## What we learned The team learned a lot about insurance. Especially as students in pure tech, we weren't too aware of the finance and insurance industry. We realized that these are real-life problems that everyone faces (we will too eventually!), so we understood that this is a problem that everyone should be more aware of. Not only this, we got to learn a good amount of new technologies such as Django and also ML techniques with AWS. ## What's next for Glass Wings Improve our ML model. Although we did train our set with some mock data, we would love to crowdsource more data for more accurate and interesting information.
## How to use First, you need an OpenAI account for a unique API key to plug into the openai.api\_key field in the generate\_transcript.py file. You'll also need to authenticate the text-to-speech API with a .json key from Google Cloud. Then, run the following code in the terminal: ``` python3 generate_transcript.py cd newscast npm start ``` You'll be able to use Newscast in your browser at <http://localhost:3000/>. Just log in with your Gmail account and you're good to go! ## Inspiration Newsletters are an underappreciated medium, and the experience of accessing them each morning could be made much more convenient if they didn't have to be clicked through one by one. Furthermore, with all the craze around AI, why not have an artificial companion deliver these morning updates to us? ## What it does Newscast aggregates all newsletters a Gmail user has received during the day and narrates the most salient points from each one using personable AI-generated summaries powered by OpenAI and deployed with React and MUI. ## How we built it Fetching mail from the Gmail API -> Generating transcripts in OpenAI -> Converting text to speech via Google Cloud -> Running on an MUI frontend ## Challenges we ran into The Gmail API was surprisingly tricky to operate with; it took a long time to bring the email strings to a form where OpenAI wouldn't struggle with them too much. ## Accomplishments that we're proud of Building a full-stack app that we could see ourselves using! Successfully tackling a front-end solution in React after spending most of our time doing backend and algos in school. ## What we learned Integrating APIs with one another, building a workable frontend solution in React and MUI. ## What's next for Newscast Generating narratives grouped by publication/day/genre. Adding more UI features, e.g. cards pertaining to individual newsletters. Building a proper backend (Flask?) to support users and e.g. saving transcripts.
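The four-step pipeline above can be compressed into a short sketch: summarize an email body with the legacy OpenAI SDK (the one configured through `openai.api_key`, as in the repo's instructions) and synthesize the narration with Google Cloud Text-to-Speech. The model name, prompt, and the `fetch_newsletter_bodies` helper are assumptions; the real generate_transcript.py may differ.

```python
# Sketch of the transcript + narration steps; the Gmail-fetching helper is assumed.
import openai
from google.cloud import texttospeech

openai.api_key = "sk-..."  # set from your own account, as described in "How to use"

def summarize(newsletter_text: str) -> str:
    resp = openai.ChatCompletion.create(              # legacy SDK interface
        model="gpt-3.5-turbo",                        # assumed model
        messages=[{"role": "user",
                   "content": "Summarize this newsletter in a friendly spoken style:\n"
                              + newsletter_text}],
    )
    return resp["choices"][0]["message"]["content"]

def narrate(text: str, out_path: str = "newscast.mp3") -> None:
    client = texttospeech.TextToSpeechClient()        # uses the .json key from Google Cloud
    audio = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(audio.audio_content)

# narrate(" ".join(summarize(body) for body in fetch_newsletter_bodies()))  # helper assumed
```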
losing
## Inspiration Every day we spend hours filling out our Google Calendars to stay organized with our events. How can we get the most out of our effort planning ahead? While Spotify already recommends music based on our individual tastes, how can we utilize artificial intelligence and software integration to get music that fits our daily routine? ## What it does By integrating Google Calendar, Spotify, and Wolfram technologies, we can create a simple but powerful web application that parses your events and generates playlists specifically for your events. Going on a date? Romance playlist. Lots of studying? Focus playlist. Time for bed? Sleep playlist. ## How we built it After identifying a problem and working on a solution, we worked on designing our site in Figma. This gave us an idea of what our final product would look like. We then integrated the Google Calendar and Spotify APIs into our application. This way, we can get information about a user's upcoming events, process them, and suggest Spotify playlists that meet their mood. We implemented Wolfram's Cloud API to understand the user's GCal events and route to related Spotify playlists for each event accordingly. Finally, to make sure our app was robust, we tested various events that a typical student might list on their calendar. We added some styling so that the final product looks clean and is easy to use. ## Challenges we ran into APIs ## Accomplishments that we're proud of Integrating Google Calendar, Spotify, and Wolfram. ## What we learned Integration of various APIs ## What's next for music.me As of now, music.me is only a web application. Going forward, we want to offer this product as an app on phones, watches, and tablets as well. We want users to be able to control their own music as well. If they want to listen to a custom playlist every time they have a specific event, they should be able to "link" an event with a playlist.
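A bare-bones sketch of the matching step described above: map keywords in a calendar event's title to a playlist mood, which would then be handed to Spotify's search. The keyword table and default mood are assumptions; the real version delegates this interpretation to Wolfram's Cloud API.

```python
# Sketch of event -> playlist-mood matching; the keyword table is an illustrative assumption.
# music.me delegates the real interpretation to Wolfram's Cloud API.

MOOD_KEYWORDS = {
    "focus":   ["study", "exam", "homework", "lecture", "work"],
    "romance": ["date", "dinner", "anniversary"],
    "sleep":   ["bed", "sleep", "wind down"],
    "workout": ["gym", "run", "practice"],
}

def mood_for_event(event_title: str, default: str = "chill") -> str:
    title = event_title.lower()
    for mood, keywords in MOOD_KEYWORDS.items():
        if any(word in title for word in keywords):
            return mood
    return default

def playlist_queries(events: list[dict]) -> list[tuple[str, str]]:
    """Pair each Google Calendar event with a Spotify search query."""
    return [(e["summary"], f"{mood_for_event(e['summary'])} playlist") for e in events]

print(playlist_queries([{"summary": "Linear algebra study session"},
                        {"summary": "Dinner date downtown"}]))
# [('Linear algebra study session', 'focus playlist'), ('Dinner date downtown', 'romance playlist')]
```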
## Inspiration ☁️: Our initial inspiration came from the infamous desktop goose "virus". The premise of the app revolves around a virtual goose causing mayhem on the user's computer screen, annoying them as much as possible. We thought it was really interesting how the goose was able to take control of the user's peripheral inputs, and decided to base our project around a similar concept, but to be used for goodwill. Of course, the idea of creating a goose app was certainly enthralling as well. Unfortunately, our designer mistook a goose for a duck, so some plans had to be changed (probably for the better). Currently, one of the most common cybercrimes is phishing (a social engineering attack to steal user data). Certain groups, especially children, are often vulnerable to these attacks, so we decided to create a duck companion, one that will protect the user from malicious phishing content. ## What it does💁‍♂️: "Detective Duck" is a real-time phishing detection application that prevents children from clicking misleading links on the web. It uses a combination of machine learning, cloud technologies, and various Python libraries to determine fraudulent links and where they are located on a page. ## Challenges we ran into 😳: * Examining and weighing bias during the preprocessing step of the data * Web scraping; difficulties in extracting HTML template links and search engine URLs, and finding the exact coordinates of these links * Working with new libraries such as pyautogui, mouse events, etc. ## Accomplishments that we're proud of 💪: * Successfully transferred our ML model from a Jupyter notebook to Google Cloud Platform (creating a bucket, deploying functions, maintaining servers) * Coding a responsive GUI (duck figure) based on mouse and keyboard detection from scratch, using OOP * Working effectively as a team with a project that demanded a variety of sections (UI/UX Design, building the ML model, developing Python scripts) ## What we learned 🧠: * Implementing and deploying an ML model on the cloud * Working with new Python tools, like Selenium, to extract detailed web page information ## What's next for Detective Duck - Phishing Detection Application 💼: * Fine-tuning the machine learning model * Adding more features to our GUI * Making our coordinate detection more accurate
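For a flavour of the classification side, here is a minimal sketch of hand-crafted URL features of the kind commonly fed to a phishing model, plus a crude heuristic score standing in for the deployed classifier. The feature set, scoring rule, and example URL are assumptions, not the team's trained model or their GCP function.

```python
# Sketch: simple URL features commonly used for phishing detection.
# Feature set and scoring are illustrative assumptions, not Detective Duck's model.
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "length": len(url),
        "num_dots": host.count("."),
        "has_ip": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host.split(":")[0])),
        "has_at": "@" in url,
        "has_https": parsed.scheme == "https",
        "num_hyphens": host.count("-"),
        "suspicious_words": sum(w in url.lower() for w in ("login", "verify", "update", "secure")),
    }

def quick_score(url: str) -> float:
    """Crude heuristic stand-in for the deployed classifier (0 = clean, 1 = phishy)."""
    f = url_features(url)
    score = (f["has_ip"] + f["has_at"] + (not f["has_https"])
             + (f["num_dots"] > 3) + (f["num_hyphens"] > 2) + (f["suspicious_words"] > 0))
    return score / 6

print(quick_score("http://192.168.0.1/secure-login-verify.example.com/update"))  # -> 0.5
```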
## Inspiration: We're trying to get involved in the AI chatbot craze and pull together cool pieces of technology -> including Google Cloud for our backend, Microsoft Cognitive Services, and the Facebook Messenger API ## What it does: Have a look - message Black Box on Facebook and find out! ## How we built it: SO MUCH PYTHON ## Challenges we ran into: State machines (i.e. mapping out the whole user flow and making it as seamless as possible) and NLP training ## Accomplishments that we're proud of: Working NLP, many API integrations including Eventful and Zapato ## What we learned ## What's next for BlackBox: Integration with Google Calendar - and movement towards a more general interactive calendar application. It's an assistant that will actively engage with you to try and get your tasks/events/other parts of your life managed. This has a lot of potential - but for the sake of the hackathon, we thought we'd try to do it on a topic that's more fun (and of course, I'm sure quite a few of us can benefit from its advice :) )
partial
## Inspiration Our team really wanted to create a new way to maximize productivity without the interference of modern technology. We often find ourselves reaching for our phones to scroll through social media for just a "5 minute break," which quickly turns into a 2-hour procrastination session. On top of that, we wanted the motivation to be delivered in a sarcastic/funny way. Thus, we developed a task manager app that bullies you into working. ## What it does The app allows you to create a to-do list of tasks that you can complete at any time. Once you decide to start a task, distracting yourself with other applications is met with reinforcement to get you back to work. The reinforcement is done through text- and sound-based notifications. Not everyone is motivated in the same way, thus the intensity of the reinforcement can be calibrated to the user's personal needs. The levels include: Encouragement, Passive-Aggression and Bullying. ## How we built it We built our project as a mobile app using Swift and Apple's SwiftUI and UserNotification frameworks. Development was done via Xcode. The app is optimized for iOS 16. ## Challenges we ran into Learning how to code in Swift. Our team did not have a lot of experience in mobile iOS development. Since we were only familiar with the basics, we wanted to include more advanced features that would force us to integrate new modules and frameworks. ## Accomplishments that we're proud of Having a product we are proud enough to demo. This is the first time anyone in our team is demoing. We spent extra time polishing the design and including animations. We wanted to deliver an app that felt like a complete product, and not just a hack, even if the scope was not very large. ## What we learned We learned front end in Swift (SwiftUI), including how to make animations. A lot about data transfer and persistence in iOS applications. And the entire development cycle of building a complete and kick-ass application. ## What's next for TaskBully? * Incorporate a scheduling/deadline feature to plan when to complete tasks. * Include an achievement system based around successfully completing tasks. * Implement even more custom sounds for different intensity levels. * Add a social feature to share success with friends. **A message from TaskBully:** Here at TaskBully, our vast team of 2 employees is deeply committed to the goal of replacing bullying with motivation. We are actively looking for sponsorships, investments, and growth opportunities until we can eventually eradicate procrastination.
# Slacker created by Albert Lai, Hady Ibrahim, and Varun Kothandaraman GitHub: *[Slacker Github](https://github.com/albertlai431/slacker-chore)* ## Inspiration In shared housing, organizing chores to ensure everyone is doing their fair share of the work is a major hassle for most people. In most cases, without direct instruction, most people simply forget about the slice of work they need to complete. ## What it does Slacker is a web app that allows users to join a group containing multiple members of their household, and from an overall bigger list of items, tasks get automatically assigned to each member in the group. Each member in the group has a couple of task viewpoints, with the main pages being the user's own personal list, the total group list, each group member's activity, and settings. The user's personal list of chores constantly refreshes each week through one-time and repeating chores for each task, with forgotten/overdue chores appearing at the top of the screen on every group member's personal page for quicker completion. ## How we built it Slacker was built using a combination of React and Chakra UI through GitHub source control. Additionally, we have created mockups of both the desktop pages and the mobile app we were planning on creating. To find pictures of the mockups, kindly check out the images we have attached to this Devpost for the items that we have created so far. ## Challenges we ran into Originally, our plan was to create an iOS/Android app through React Native and create our fleshed-out Figma app mockups. The full idea simply had too many features and details for us to do both: * Create the mobile application * Create the full application, with all the features we brainstormed The first challenge that we ran into was the mockup and design of the application. UI/UX design caused us a lot of grief, as we found it difficult to create a design that we felt both looked good and was easy to understand in terms of functionality. The second challenge that we faced was the Google authentication feature we created for logging into the website. The main issue was that the implementation of the feature created a lot of issues and bugs that delayed our work by a considerable amount of time. Additionally, with the time constraint, we were able to create a React web application that has some basic functionality as a prototype for our original idea. ## Accomplishments that we're proud of We are happy with the web application prototype that we have created in the given time. We have implemented: * Finished the landing page * Finished the Google authentication * Home screen * Create tasks that will be automatically assigned to users on a recurring basis * Create invite and join group * Labels the slacker member with the least tasks * Donut graphs for indication of task completion every week * The ability to see every task for each day * The ability to sign out of the webpage * and even more! ## What we learned As a group, since for the majority of us it was our first hackathon, we put more emphasis and time on brainstorming an idea instead of just sitting down and starting to code our project up. We definitely learned that coming into the hackathon with some preconceived notions of what we individually wanted to code would have saved us more than half a day. We also were surprised to learn how useful Figma is as a tool for UI/UX design for web development.
The ability to copy-paste CSS code for each element of the webpage was instrumental in our ability to create a working prototype faster. ## What's next for Slacker For Slacker, the next steps are to: * Finish the web application with all of the features * Create and polish the full web application, with all the visual features we brainstormed * Finish the mobile application with all of the same features as the web application we aim to complete
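As a language-agnostic sketch of the automatic assignment Slacker describes above (each new chore going to whichever group member currently has the fewest tasks), here is the core logic in Python; the member names and tie-breaking rule are assumptions, and the production logic lives in the React app.

```python
# Sketch of least-loaded chore assignment; names and tie-breaking are assumptions.
from collections import defaultdict

def assign_chores(chores: list[str], members: list[str]) -> dict[str, list[str]]:
    """Give each chore to the member with the fewest assigned tasks so far."""
    load = defaultdict(list)
    for member in members:
        load[member]          # make sure everyone appears, even with zero chores
    for chore in chores:
        slacker = min(members, key=lambda m: len(load[m]))  # current "slacker"
        load[slacker].append(chore)
    return dict(load)

week = ["dishes", "vacuum", "trash", "bathroom", "groceries"]
print(assign_chores(week, ["Albert", "Hady", "Varun"]))
# {'Albert': ['dishes', 'bathroom'], 'Hady': ['vacuum', 'groceries'], 'Varun': ['trash']}
```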
## Inspiration While deciding to study, work, or do a task on your laptop is easy, actually focusing on the task for extended periods of time can be difficult. Especially in today's age, with more studies coming out on the shrinkage of attention spans, we decided to actually do something about it. ## What it does Our project is a productivity app designed to force its users to focus. First, users enter a task to work on, say "linear algebra." If given permission, our app will proctor your screen, and if it sees there is no "linear algebra" on the screen, it will beep and pop up with an alert, forcing the user to bring their attention back to their task. Users also have the option to have their gaze recorded, and if they are looking away for more than 5 seconds, say on their phone or talking to their friend, the app will again beep and force the user to look at the screen. ## How we built it We built this 100% in Swift and SwiftUI, in Xcode. There are no APIs and no data being read by an LLM; everything is local and stays only on the user's machine, so there are no privacy concerns to be worried about. We used Apple's Vision framework to grant access to the user's camera and screen and recognize everything going on. ## Challenges we ran into Some challenges we ran into were using a completely new language, UI framework, and IDE. Configuring the camera to be granted permission honestly took us too long and set us back a bit. The gaze tracker was a bit sensitive and would beep at anything, so calibrating that to our needs took a decent chunk of time. Also, sometimes merging our git branches and resolving conflicts took too long. However, there were no breaking errors we ran into, and our project is a finished, working product. ## Accomplishments that we're proud of We are proud of the fact that we completely met our feature goals during this hackathon. Every hour we would make checklist items, and then work on those checklist items until they were complete. We are extremely happy looking at our documentation and it being full of checked-off items! ## What we learned We learned a lot about Xcode, Swift, SwiftUI, and Apple's different libraries. ## What's next for Sapphire What's next for Sapphire is adding more features we think users would like, for example a "Super Annoying Mode" where, if the user is off task, an alarm blares and the screen flashes until they return to the task. Also local analytics: average time on task, time off task, how many times you get distracted in a session, total lifetime sessions. These are things the user might like to know about themselves. Also more quality-of-life changes to make Sapphire the best it can be.
partial
## Inspiration Our inspiration for this project was to develop a new approach to how animal shelter networks function, and how the nationwide animal care and shelter systems can be improved to function more efficiently, and cost effectively. In particular, we sought out to develop a program that will help care for animals, find facilities capable of providing the care needed for a particular animal, and eradicate the use of euthanization to quell shelter overpopulation. ## What it does Our program retrieves input data from various shelters, estimates the capacity limit of these shelters, determines which shelters are currently at capacity, or operating above capacity, and optimizes the transfer or animals capable of being moved to new facilities in the cheapest way possible. In particular, the process of optimizing transfers to different facilities based on which facilities are overpopulated was the particular goal of our hack. Our algorithm moves animals from high-population shelters to low-population shelters, while using google maps data to find the optimal routes between any two facilities. Optimization of routes takes into account the cost of traveling to a different facility, and the cost of moving any given number of animals to that facility through cost estimations. Finally, upon determining optimal transfer routes between facilities in our network, our algorithm plots the locations of a map, giving visual representations of how using this optimization scheme will redistribute the animal population over multiple shelters. ## How we built it We built our program using a python infrastructure with json API calls and data manipulation. In particular, we used python to make json API calls to rescue groups and google maps, stored the returned json data, and used python to interpret and analyze this data. Since there are no publicly available datasets containing shelter data, we used rescue groups to generate our own test data sets to run through our program. Our program takes this data, and optimizes how to organize and distribute animals based on this data. ## Challenges we ran into The lack of publicly available data for use was particularly difficult since we needed to generate our own datasets in order to test our system. This problem made us particularly aware of the need to generate a program that can function as a nationwide data acquisition program for shelters to input and share their animal information with neighboring shelters. Since our team didn't have significant experience working on many parts of this project, the entire process was a learning experience. ## Accomplishments that we're proud of We're particularly proud of the time we managed to commit to building this program, given the level of experience we had going into this project as our first hackathon. Our algorithm operates efficiently, using as much information as we were able to incorporate from our limited dataset, and constraints on how we were able to access the data we had compiled. Since our algorithm can find the optimal position to send animals that are at risk due to their location in an overpopulated shelter, our program offers a solution to efficiently redistribute animals at the lowest cost, in order to prevent euthanization of animals, which was our primary goal behind this project. ## What we learned Aside from technical skills learned in the process of working on this project, we all learned how to work as a team on a large software project while under a strict time constraint. 
This was particularly important since we only began working on the project on the afternoon of the second day of the hackathon. In terms of technical skills, we all learned a lot about using APIs and making JSON calls in Python, and went much further in depth with Python than any of us had before. Additionally, this hackathon was the first time one of our team members had ever coded, and by the end of the project she had written the entire front end of the project and the data visualization process. ## What's next for Everybody Lives We had a lot of other ideas that we came up with as a result of this project that we wanted to implement, but did not have the time or resources available to work on. Specifically, there are numerous areas we would like to improve upon, and we conceptualized several solutions to issues present in today's shelter management and systems. Overall, we envisioned a software program used by shelters across the country in order to streamline the data acquisition process and share this data between shelters in order to coordinate animal transfers and resource sharing to better serve animals at any shelter. The data acquisition process could be improved by developing an easy-to-use mobile or desktop app that allows shelters to easily input information on new arrivals, which is immediately added to a nationally available dataset that can be used to optimize transfers, resource sharing, and population distribution. Another potential contribution to our program would be to develop a type of transportation and ride-share system that would allow people traveling various distances to transport animals from shelter to shelter, such that animals more suited to particular climates and regions would be more likely to be adopted in those regions. This feature would be similar to an Uber Pool system. Lastly, the most prominent method of improving our program would be to develop a more robust algorithm to run the optimization process, one that incorporates more information on every animal and makes more detailed optimization decisions based on larger input datasets. Additionally, a machine learning mechanism could be implemented in the algorithm in order to learn what situations warrant an animal transfer, from the perspective of the shelter, rather than basing transfers on data alone. This would make the algorithm grow, learn, and become more robust over time.
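A simplified sketch of the transfer optimization described above: shelters over their estimated capacity send animals to the cheapest under-capacity shelter, where cost is distance times a per-animal rate. The distance table stands in for Google Maps data, and the cost model and greedy strategy are illustrative assumptions rather than the exact algorithm.

```python
# Greedy sketch of redistributing animals from over-capacity shelters.
# Distance table stands in for Google Maps data; the cost model is an assumption.

SHELTERS = {
    "A": {"population": 120, "capacity": 80},
    "B": {"population": 40,  "capacity": 90},
    "C": {"population": 60,  "capacity": 100},
}
DISTANCE_KM = {("A", "B"): 35, ("A", "C"): 80, ("B", "C"): 50}
COST_PER_ANIMAL_KM = 0.15  # assumed transport cost

def dist(a, b):
    return DISTANCE_KM.get((a, b)) or DISTANCE_KM.get((b, a))

def plan_transfers(shelters):
    transfers = []
    for src, s in shelters.items():
        while s["population"] > s["capacity"]:
            # Pick the cheapest destination that still has free space.
            options = [(dist(src, d) * COST_PER_ANIMAL_KM, d)
                       for d, t in shelters.items()
                       if d != src and t["population"] < t["capacity"]]
            if not options:
                break
            cost, dest = min(options)
            move = min(s["population"] - s["capacity"],
                       shelters[dest]["capacity"] - shelters[dest]["population"])
            s["population"] -= move
            shelters[dest]["population"] += move
            transfers.append({"from": src, "to": dest, "animals": move,
                              "est_cost": round(cost * move, 2)})
    return transfers

print(plan_transfers(SHELTERS))
# e.g. [{'from': 'A', 'to': 'B', 'animals': 40, 'est_cost': 210.0}]
```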
## Inspiration This project was a response to the events that occurred during Hurricane Harvey in Houston last year, wildfires in California, and the events that occurred during the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We are also informing people of the risk factors in certain areas so that they will be more knowledgeable when making decisions for travel, their futures, and taking preventative measures. ## What it does Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on react-native-map API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics. Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previous resolved incidents of all types, we can generate a risk factor that provides a way to gauge how safe the range a user is in based off the most dangerous range within our database. ## How we built it We used react-native, MongoDB, Javascript, NodeJS, and the Google Cloud Platform, and various open source libraries to help build our hack. ## Challenges we ran into Ejecting react-native from Expo took a very long time and prevented one of the members in our group who was working on the client-side of our app from working. This led to us having a lot more work for us to divide amongst ourselves once it finally ejected. Getting acquainted with react-native in general was difficult. It was fairly new to all of us and some of the libraries we used did not have documentation, which required us to learn from their source code. ## Accomplishments that we're proud of Implementing the Heat Map analytics feature was something we are happy we were able to do because it is a nice way of presenting the information regarding disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Generating successful scripts to test our app and debug any issues was also something we were proud of and that helped us get past many challenges. ## What we learned We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times, they have glaring drawbacks and limitations which may make another, more 'complicated' framework, a better choice in the long run. 
## What's next for Supermaritan In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends over time and displaying risk factors for each individual incident type are things we definitely plan to do in the future.
## Inspiration Autism is the fastest-growing developmental disorder worldwide – preventing 3 million individuals worldwide from reaching their full potential and making the most of their lives. Children with autism often lack crucial communication and social skills, such as recognizing emotions and facial expressions in order to empathize with those around them. The current gold standard for emotion recognition therapy is applied behavior analysis (ABA), which uses positive reinforcement techniques such as cartoon flashcards to teach children to recognize different emotions. However, ABA therapy is often a boring process for autistic children, and the cartoonish nature of the flashcards doesn't fully capture the complexity of human emotion communicated through real facial expressions, tone of voice, and body language. ## What it does Our solution is KidsEmote – a fun, interactive mobile app that leverages augmented reality and deep learning to help autistic children understand emotions from facial expressions. Children hold up the phone to another person's face – whether it's their parents, siblings, or therapists – and cutting-edge deep learning algorithms identify the face's emotion as one of joy, sorrow, anger, or surprise. Then, four friendly augmented reality emojis pop up as choices for the child to choose from. Selecting the emoji correctly matching the real-world face creates a shower of stars and apples in AR, and a score counter helps gamify the process to encourage children to keep on playing and get better at recognizing emotions. The interactive nature of KidsEmote helps make therapy seem like nothing more than play, increasing the rate at which children improve their social abilities. Furthermore, compared to cartoon faces, the real facial expressions that children with autism recognize in KidsEmote are exactly the same as the expressions they'll face in real life – giving them greater security and confidence to engage with others in social contexts. ## How we built it KidsEmote is built on top of iOS in Swift, and all augmented reality objects were generated through ARKit, which provided easy-to-use physics and object manipulation capabilities. The deep learning emotion classification on the backend was conducted through the Google Cloud Vision API, and 3D models were generated through Blender and also downloaded from Sketchfab and TurboSquid. ## Challenges we ran into Since it was our first time working with ARKit and mobile development, learning the ins and outs of Swift as well as creating augmented reality objects was truly an eye-opening experience. Also, since the backend calls to the Vision API were asynchronous, we had to carefully plan and track the flow of inputs (i.e. taps) and outputs for our app. Finding suitable 3D models for our app also required much work – most online models that we found were quite costly, and as a result we ultimately generated our own 3D facial expression emoji models with Blender. ## Accomplishments that we're proud of Building a fully functional app, working with Swift and ARKit for the first time, successfully integrating the Vision API into our mobile backend, and using Blender for the first time! ## What we learned ARKit, Swift, physics for augmented reality, and using 3D modeling software. We also learned how to tailor the user experience of our software specifically to our audience to make it as usable and intuitive as possible.
For instance, we focused on minimizing the amount of text and making sure all taps would function as expected inside our app. ## What's next for KidsEmote KidsEmote represents a complete digital paradigm shift in the way autistic children are treated. While much progress has been made in the past 36 hours, KidsEmote opens up so many more ways to equip children with autism with the necessary interpersonal skills to thrive in social situations. For instance, KidsEmote can be easily extended to help autistic children distinguish between different emotions from the tone of one's voice, and understand another's mood based on their body gestures. Integration between all these various modalities only yields more avenues for exploration further down the line. In the future, we also plan on incorporating video streaming abilities into KidsEmote to enable autistic children from all over the world to play with each other and meet new friends. This would greatly facilitate social interaction on an unprecedented scale between children with autism, since they might not otherwise have the opportunity to do so in traditional social contexts. Lastly, therapists can also instruct parents to use KidsEmote as an at-home tool to track the progress of their children – helping parents become part of the process and truly understand how their kids are improving first-hand.
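For reference, the Vision API side of the flow above can be sketched in a few lines of Python: face detection returns likelihoods for joy, sorrow, anger, and surprise, and the strongest one becomes the emotion the AR emojis are matched against. The mapping onto KidsEmote's four emoji choices is an assumption about how the app interprets these fields, and the real calls are made from the Swift app rather than Python.

```python
# Python sketch of the emotion lookup performed against Cloud Vision;
# the mapping onto the app's emoji choices is an assumption.
from google.cloud import vision

def dominant_emotion(image_bytes: bytes) -> str:
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=image_bytes))
    if not response.face_annotations:
        return "no face found"
    face = response.face_annotations[0]

    # Vision reports a likelihood enum (VERY_UNLIKELY .. VERY_LIKELY) per emotion.
    likelihoods = {
        "joy": face.joy_likelihood,
        "sorrow": face.sorrow_likelihood,
        "anger": face.anger_likelihood,
        "surprise": face.surprise_likelihood,
    }
    return max(likelihoods, key=lambda k: int(likelihoods[k]))

# with open("face.jpg", "rb") as f:
#     print(dominant_emotion(f.read()))  # e.g. "joy"
```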
winning
# QThrive Web-based chatbot to facilitate journalling and self-care. Built for QHacks 2017 @ Queen's University.
## Inspiration For this hackathon, we wanted to build something that could have a positive impact on its users. We've all been to university ourselves, and we understood the toll stress took on our minds. Demand for mental health services among youth across universities has increased dramatically in recent years. A Ryerson study of 15 universities across Canada shows that all but one university increased their budget for mental health services. The average increase has been 35 per cent. A major survey of over 25,000 Ontario university students done by the American College Health Association found that there was a 50% increase in anxiety, a 47% increase in depression, and an 86 per cent increase in substance abuse since 2009. This can be attributed to the increasingly competitive job market that doesn't guarantee you a job if you have a degree, increasing student debt and housing costs, and a weakening Canadian middle class and economy. It can also be attributed to social media, where youth are becoming increasingly digitally connected to environments like Instagram. People on Instagram only share the best, the funniest, and most charming aspects of their lives, while leaving the boring beige stuff like the daily grind out of it. This indirectly perpetuates the false narrative that everything you experience in life should be easy, when in fact, life has its ups and downs. ## What it does One good way of dealing with overwhelming emotion is to express yourself. Journaling is an often overlooked but very helpful tool because it can help you manage your anxiety by helping you prioritize your problems, fears, and concerns. It can also help you recognize those triggers and learn better ways to control them. This brings us to our application, which firstly lets users privately journal online. We implemented the IBM Watson API to automatically analyze the journal entries. Users can receive automated tonal and personality data which can indicate if they're feeling depressed or anxious. It is also key to note that medical practitioners only have access to the results, and not the journal entries themselves. This is powerful because it takes away a common anxiety felt by patients, who are reluctant to take the first step in healing themselves because they may not feel comfortable sharing personal and intimate details up front. MyndJournal allows users to log on to our site and express themselves freely, exactly as if they were writing a journal. The difference being, every entry in a person's journal is sent to IBM Watson's natural language processing tone-analyzing APIs, which generate a data-driven picture of the person's mindset. The results of the API are then rendered into a chart to be displayed to medical practitioners. This way, all the user's personal details/secrets remain completely confidential while still providing enough data to counsellors to allow them to take action if needed. ## How we built it On the back end, all user information is stored in a PostgreSQL users table. Additionally, all journal entry information is stored in a results table. This aggregate data can later be used to detect trends in university lifecycles. An EJS view template engine is used to render the front end. After user authentication, a journal entry, when submitted, is sent to the back end to be fed asynchronously into the IBM Watson language processing APIs. The results are then stored in the results table, associated with a user\_id (one-to-many relationship).
Data is pulled from the database to be serialized and displayed intuitively on the front end. All data is persisted. ## Challenges we ran into Rendering the data into a chart that was both visually appealing and provided clear insights. Storing all API results in the database and creating join tables to pull data out. ## Accomplishments that we're proud of Building a entire web application within 24 hours. Data is persisted in the database! ## What we learned IBM Watson API's ChartJS Difference between the full tech stack and how everything works together ## What's next for MyndJournal A key feature we wanted to add for the web app for it to automatically book appointments with appropriate medical practitioners (like nutritionists or therapists) if the tonal and personality results returned negative. This would streamline the appointment making process and make it easier for people to have access and gain referrals. Another feature we would have liked to add was for universities to be able to access information into what courses or programs are causing the most problems for the most students so that policymakers, counsellors, and people in authoritative positions could make proper decisions and allocate resources accordingly. Funding please
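Below is a minimal, illustrative sketch of the MyndJournal flow described above: a journal entry is sent to a tone-analysis endpoint and only the resulting scores are stored in a results table keyed by user\_id (one-to-many). The project itself is built on Node/Express with PostgreSQL and the IBM Watson APIs; this Python version is purely a stand-in, the endpoint URL, credentials, and table columns are assumptions, and SQLite stands in for PostgreSQL.

```python
import sqlite3
import requests

# Hypothetical tone-analysis endpoint and API key -- placeholders only,
# not the real IBM Watson URL or credentials.
TONE_API_URL = "https://example.com/v3/tone"
API_KEY = "replace-me"

def analyze_tone(entry_text: str) -> dict:
    """Send a journal entry to a tone-analysis service and return {tone: score}."""
    resp = requests.post(
        TONE_API_URL,
        auth=("apikey", API_KEY),
        json={"text": entry_text},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"tones": [{"tone_name": "Sadness", "score": 0.61}, ...]}
    return {t["tone_name"]: t["score"] for t in resp.json().get("tones", [])}

def store_results(db: sqlite3.Connection, user_id: int, scores: dict) -> None:
    """Persist one row per tone score, associated with the user (one-to-many)."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(id INTEGER PRIMARY KEY, user_id INTEGER, tone TEXT, score REAL)"
    )
    db.executemany(
        "INSERT INTO results (user_id, tone, score) VALUES (?, ?, ?)",
        [(user_id, tone, score) for tone, score in scores.items()],
    )
    db.commit()

if __name__ == "__main__":
    db = sqlite3.connect("myndjournal.db")
    scores = analyze_tone("Midterms are crushing me and I can't sleep.")
    store_results(db, user_id=42, scores=scores)  # only scores are stored, never the entry text
```

The key design point mirrored here is that the raw entry text never reaches the database, which is what keeps the journal itself confidential while still giving counsellors usable signals.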
## Inspiration Mental health is a really common problem among people in general and university students in particular. I myself felt I was dealing with mental health issues a couple of years back, and I found it quite difficult to reach out for help, as I felt it would make me look weak to others. I recovered from my illness when I actually got the courage to ask someone for help. After talking with my peers, I found that this is a common problem amongst young adults. I wanted to create a product which provides you with the mental health resources you need without anyone finding out. Your data is never saved, so you don't have to worry about anyone ever finding out. ## What it does This product is called the MHR Finder Bot, or MHR for short. This bot asks you 10 yes-or-no questions and then a question about your university, followed by your postal code. After you answer these questions (if you feel comfortable), it provides you with some general resources, as well as personalized resources based on your postal code and university. ## How I built it I built the chatbot using deep learning techniques. The bot is trained on a dataset and gives you resources based on your answers to the 12 questions that it asks you. I used a special recurrent neural network to allow the bot to give resources based on the user's responses. I started by making the backend portion of the code. I finished that around the 24-hour mark and then spent the next 12 hours working on the UI, making it simple and user friendly. I chose colors associated with mental health awareness and added an easter egg in which the scroll-wheel cursor is a heart. The bot is also fully anonymous, as mental illness is a very sensitive topic. ## Challenges I ran into A challenge I ran into was making the UI side of things look appealing and welcoming. Additionally, close to the project deadline I kept getting an error that one of the variables was used before being defined; that was a challenging fix, but I figured out the problem in the end. ## Accomplishments that we're proud of I was proud that I was able to get a working chatbot done before the deadline, considering I was working alone. Additionally, it was my first time using several technologies and libraries in Python, so I was quite happy that I was able to use them effectively. Finally, I find it an accomplishment that such a product can help others suffering from mental illnesses. ## What I learned I improved my knowledge of TensorFlow and learned how to use new libraries such as nltk and pickle. Additionally, I was quite pleased that I was able to learn how to work with intents when making a chatbot. ## What's next for MHR Finder Bot I made two chatbots over the 36 hours: one used for finding mental health resources and the other to simulate a normal conversation, similar to ChatGPT. I would like to combine these two so that when trying to find mental health resources, you can also carry on a natural conversation with the bot at the same time.
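As a rough illustration of the resource-matching idea described in "How I built it", here is a simplified, rule-based sketch. The real bot uses a recurrent neural network built with TensorFlow and nltk, so this is not the actual model; the question-to-category mapping, resource list, scoring threshold, and postal-code handling are all invented for the example.

```python
# Simplified stand-in for the MHR Finder Bot's matching step.
# The real project trains a recurrent neural network on an intents dataset;
# here, yes/no answers are scored against hypothetical risk categories instead.

RESOURCES = {  # hypothetical resource catalogue
    "anxiety": ["Campus counselling drop-in", "Anxiety Canada online tools"],
    "depression": ["Peer support line", "Local CMHA branch"],
    "crisis": ["24/7 crisis hotline"],
}

# Which categories each of the 10 yes/no questions contributes to (assumed mapping).
QUESTION_CATEGORIES = [
    ["anxiety"], ["anxiety"], ["depression"], ["depression"], ["crisis"],
    ["anxiety", "depression"], ["depression"], ["anxiety"], ["crisis"], ["depression"],
]

def recommend(answers: list, university: str, postal_code: str) -> list:
    """Return resources whose category was flagged by at least two 'yes' answers."""
    scores = {}
    for answer, categories in zip(answers, QUESTION_CATEGORIES):
        if answer:
            for cat in categories:
                scores[cat] = scores.get(cat, 0) + 1

    picks = [r for cat, n in scores.items() if n >= 2 for r in RESOURCES[cat]]
    # Personalization: tag results with local context (purely illustrative).
    return [f"{r} (near {postal_code[:3].upper()}, {university})" for r in picks]

print(recommend([True, True, False, True, False, True, True, False, False, True],
                university="Example University", postal_code="k7l3n6"))
```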
winning
## 💡 Inspiration * Typically, AI applications assist users in a request/response fashion, where users ask a question, and the AI provides an answer. However, I wanted to create an experience where the **user becomes the AI**. * The idea for Speaktree AI was born from a desire to help people improve their public speaking and communication skills. I recognized that many individuals struggle with articulating their thoughts clearly and confidently. * With Speaktree AI, as you speak, the app prompts questions and provides real-time suggestions to help build and improve your responses dynamically, while gathering AI-driven analytics. ## 🚀 What it does * As you speak, Speaktree provides real-time suggestions to help you build and improve your answers. * The app also offers detailed analytics on your speech, helping you refine your communication skills. * Powered by AWS Bedrock, Lambda, and API Gateway, Speaktree AI delivers an interactive and seamless experience on a native iOS Swift app. ## 🛠️ How I built it * **Backend**: Utilized AWS Bedrock for the model and exposed an API endpoint using AWS Lambda and API Gateway. Also integrated the OpenAI API. This allows the user to select from a host of models and choose the one they like best. * **Frontend**: Built the mobile app natively using Xcode, Swift, and SwiftUI, ensuring a seamless and responsive user interface. Integrated Face ID for authentication. ## 🏃 Challenges I ran into * **Low-Latency Speech to Text**: Ensuring real-time speech processing was a challenge. Instead of using a speech-to-text API, which was too slow, I utilized the SF Speech Recognizer (iOS native), which processes speech on-device quickly and efficiently. This significantly improved the app's ability to provide real-time suggestions without lag. ## 🎉 Accomplishments that I'm proud of * **Fast Model Querying to Support Real-Time Assistance**: One of the biggest challenges was ensuring the app could process speech and provide suggestions in real time without lag. I solved this by implementing dynamic model selection (**efficient using Bedrock**), allowing users to choose the most suitable model based on the contents of their conversation, as different language models excel at different tasks. This ensures quick and relevant feedback tailored to the user's needs. ## 📚 What I learned * Throughout the development of Speaktree AI, I realized that different language models excel at different tasks. This understanding was crucial in designing the app to leverage the strengths of various models effectively. Additionally, I discovered the **power of AWS Bedrock** and its ecosystem in enabling efficient utilization of these different models within the app. ## 🔮 What's next for Speaktree AI: Real-time Speech Enhancement * **Audio Suggestions**: Provide real-time suggestions via audio so users don’t have to read while speaking, helping them build confidence during in-person meetings and presentations. * **Gamified Learning**: Introduce a more game-like mode with scores and levels to make learning engaging and fun, encouraging users to improve their speaking skills through challenges and rewards. * **Empathy Detection**: Integrate empathy detection to provide users with more specific insights on their tone and emotional delivery, helping them communicate more effectively and empathetically. * **Progress Tracking**: Persist users' previous conversations and measure their improvement over time, providing detailed progress reports and personalized recommendations for further development.
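To make the Bedrock-backed model selection in "How I built it" more concrete, here is a small Python sketch of a Lambda-style handler that picks a model per request and queries it through the Bedrock runtime. The project's actual backend code isn't shown in the writeup, so the model IDs, selection rules, and payload fields here are assumptions; check the AWS Bedrock documentation for the exact request/response shapes.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")  # region/credentials come from the environment

# Hypothetical mapping from conversation type to a Bedrock model ID.
MODEL_BY_TASK = {
    "interview": "anthropic.claude-3-haiku-20240307-v1:0",
    "presentation": "meta.llama3-8b-instruct-v1:0",
    "default": "amazon.titan-text-express-v1",
}

def suggest(transcript: str, task: str = "default") -> str:
    """Return a real-time speaking suggestion for the partial transcript."""
    model_id = MODEL_BY_TASK.get(task, MODEL_BY_TASK["default"])
    response = bedrock.converse(
        modelId=model_id,
        messages=[{
            "role": "user",
            "content": [{"text": f"Suggest how to continue this answer: {transcript}"}],
        }],
        inferenceConfig={"maxTokens": 128},
    )
    # Assumed response traversal for the Converse API.
    return response["output"]["message"]["content"][0]["text"]

def lambda_handler(event, context):
    """Minimal API Gateway -> Lambda entry point (simplified)."""
    return {"statusCode": 200,
            "body": suggest(event.get("transcript", ""), event.get("task", "default"))}
```

The design choice reflected here is that model choice is just a lookup, so swapping in a different model per conversation type costs one dictionary entry rather than a new integration.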
## Inspiration Did you ever have a hard time learning a new language, or want to know what people are saying behind your back when they are speaking another language? Come to Yapple, where you can yap to new friends while having a hard time understanding each other, and that is OK. The struggle is always real. ## What it does When you first create an account, you are required to fill in which languages you are fluent in and which you are trying to learn. Afterwards, you will be matched with potential Yappers who are fluent in the languages that you are trying to learn, and vice versa for them. ## How we built it We used MongoDB as our database to store user and chat information. With Express and NodeJS, we created routes to connect our server to our client. For our beautifully laid out UI, we used ReactJS. ## Challenges we ran into It is widely known that ReactJS has a really steep learning curve, so it was quite hard to pick up since three of us were learning it throughout the hackathon. There were plenty of bugs that we needed to solve. Shoutout to our mentors for helping us with that. ## Accomplishments that we're proud of We learned how to use ReactJS within 36 hours and created a live-update feature for when users message each other. We created a user interface to display the chat with live updates, along with ways to find new Yappers to yap with. ## What we learned We learned a lot about full-stack web application development. We also learned the importance of dividing the work amongst us to ensure we were as productive as possible during the 36 hours. ## What's next for Yapple Sending files such as images over the chat, increasing the security of users' information between the client and server, an auto-translator for a better learning experience, and adding more functions to the profile page (including a bio and avatar you can upload, editing your profile information, and a way to view the statistics we keep track of).
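The matching step in "What it does" boils down to a reciprocal language query: find users who are fluent in something you are learning and learning something you are fluent in. The project uses MongoDB behind a Node/Express backend; the sketch below shows the same query in Python with pymongo purely for illustration, and the collection and field names (`users`, `fluent`, `learning`) are assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
users = client["yapple"]["users"]

def find_matches(me: dict, limit: int = 20):
    """Return potential Yappers: fluent in a language I'm learning, learning one I'm fluent in."""
    cursor = users.find(
        {
            "_id": {"$ne": me["_id"]},
            "fluent": {"$in": me["learning"]},    # they can teach me
            "learning": {"$in": me["fluent"]},    # I can teach them
        },
        projection={"username": 1, "fluent": 1, "learning": 1},
    ).limit(limit)
    return list(cursor)

# Example usage with an assumed user document shape.
me = {"_id": 1, "username": "sam", "fluent": ["English"], "learning": ["Cantonese", "French"]}
for match in find_matches(me):
    print(match["username"], "speaks", match["fluent"])
```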
In the distant future, mysterious gatherings unfolded across the globe, where those with an affinity for technology convened. Oddly enough, participants carried only a singular item on their keychains: a small, unassuming stick. This cryptic phenomenon came to be known as "Drive Duels," born from a thirst to unite reality and the virtual realm. Its foundation rested upon its users' thumb drives, serving as an interface to the strange world on the other side. ## Inspiration We were inspired by the physicality of old video games, where physical hardware was directly tied to the game. We wanted to use something that most people have: a thumb drive. The data stored on a small drive isn’t fit for something like an online game, so instead we used them to give each player a unique physical object that can be used to meet others physically and challenge them. Anyone can take their thumb drive and use it to play *Drive Duels*. ## What it does *Drive Duels* is a gotta-catch-em-all-style turn-based combat game utilizing generative AI to create unique creatures and USB media to encourage physical interactions between players as they battle. In *Drive Duels*, every creature (*Byteling*) is represented as a file that can be downloaded (“created”), transferred from disk to disk, or sent to a friend. Players carry USB drives to store their parties, which they may take into battle to fight on their behalf. Players will primarily interact with two pieces of software: The Byteling Manager app is accessible from your own computer at [driveduels.co](https://driveduels.co/), and allows you to both create new Bytelings and review your existing ones—heal them up after battles, track their stats, and more. The Battle Station is accessible at select in-person locations and supports live, turn-based, two-player battles. Players choose to utilize their fielded Byteling’s moves or swap in other Bytelings to strategically outmaneuver opponents and seize victory. ## How we built it Byteling Manager is built with React, TailwindCSS, and Chrome’s experimental filesystem API. A backend service utilizes the ChatGPT and DALL-E APIs to create new creatures with unique descriptions, art, and movesets when requests are sent from the front-end. The Battle Station software is designed with Electron (for ease of direct filesystem access) and React. It utilizes a complex state machine to ensure that battles are always kept moving. ## Challenges we ran into Recognizing a USB drive's connection automatically was tough, and many of the methods we wanted to try with our Electron app were simply incompatible. ChatGPT had a tendency to generate overpowered, misleading, or flat-out useless attacks, so we designed a balancing system to reassign attack values based on internal scoring metrics, which helped counteract some of these broken character generations. ## Accomplishments that we're proud of We’re especially proud of the creature creator, which takes inspiration from user-selected keywords to create monsters with their own unique lore, art assets, attacks, and more, using GPT-3 and DALL-E. We are also proud of the battle system, which allows Bytelings from different owners to engage in combat together on a shared system. ## What we learned We learned how to design a website mockup in Figma, and how to use TailwindCSS to style the site to our liking. We also got a better understanding of integrating ChatGPT/DALL-E into our web apps and creating more complex data from it. 
## What's next for Drive Duels We’re hoping to flesh out a leveling system, with experience gained in battle and locked attacks.
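Since the balancing system mentioned in "Challenges we ran into" (reassigning attack values from internal scoring metrics) is only described in prose, here is a rough Python sketch of one way such rebalancing could work: scale each generated move so the Byteling's total "power budget" is fixed. The scoring formula, budget value, and move fields are assumptions for illustration, not the project's actual code.

```python
# Rebalance AI-generated moves so every Byteling fits the same power budget.
# "score" here is a made-up metric combining damage and accuracy.

POWER_BUDGET = 300.0  # assumed total budget per creature

def move_score(move: dict) -> float:
    """Crude internal score: expected damage per use."""
    return move["damage"] * move["accuracy"]

def rebalance(moves: list) -> list:
    """Scale damage values so the summed scores hit the shared budget."""
    total = sum(move_score(m) for m in moves) or 1.0
    factor = POWER_BUDGET / total
    return [{**m, "damage": round(m["damage"] * factor)} for m in moves]

# Example: an "overpowered" generation gets scaled down to the shared budget.
generated = [
    {"name": "Byte Blast", "damage": 500, "accuracy": 0.9},
    {"name": "Null Pointer", "damage": 120, "accuracy": 0.7},
]
for move in rebalance(generated):
    print(move["name"], move["damage"])
```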
losing
## Inspiration On a night in January 2018, at least 7 students reported symptoms of being drugged after attending a fraternity party at Stanford [link](https://abc7news.com/2957171/). Although we are only halfway into this academic year, Stanford has already issued seven campus-wide reports about possible aggravated assault/drugging. This is not just a problem at Stanford; drug-facilitated sexual assault (DFSA) is a serious problem among teens and college students nationwide. Our project is deeply motivated by this saddening situation that people around us at Stanford face, and by the uneasiness caused by the possibility of experiencing such crimes. This project delivers SafeCup, a sensor-embedded smart cup that warns owners if their drink has been tampered with. ## What it does SafeCup is embedded with a simple yet highly sensitive electrical conductivity (EC) sensor which detects the concentration of total dissolved solids (TDS). Using an auto-ranging resistance measurement system designed to measure the conductivity of various liquids, the cup takes several measurements within a certain timeframe and warns the owner by pushing a notification to their phone if it senses a drastic change in TDS concentration. Such a change signifies a change in the content of the drink, which can be caused by the addition of chemicals such as drugs. ## How we built it We used high-surface-area electrodes and a set of resistors to build the EC sensor, and an Arduino microcontroller to collect the data. The Arduino then sends the data to a computer, which analyzes the measurements, performs the computation, and notifies the owner through "pushed", an API that sends push notifications to Android or iOS devices. ## Challenges we ran into The main challenge was getting a stable and accurate EC reading from the home-made sensor. EC depends on the surface area of and the distance between the electrodes, so we had to design an electrode assembly where the distance between the electrodes does not vary due to movement. Liquids can have a large range of conductivity, from 0.005 mS/cm to 5000 mS/cm. In order to measure conductivity at the lower end of the range, we increased the surface area of our electrodes significantly, to around 80 cm^2, while typical commercial TDS sensors use less than 0.5 cm^2. In order to measure such a large range of values, we had to design a dynamic auto-ranging system with a range of reference resistors. Another challenge is that we were unable to make our cup look more beautiful, or normal/party-like. This is mainly because of the size of the Arduino UNO, which is hard to disguise inside a normal party Solo cup. This is why, after several failed cup designs, we decided to make the cup simple and transparent and focus on demonstrating the technology instead of the aesthetics. ## Accomplishments that we're proud of We're most proud of the simplicity of the device. The device is made from commonly found items, which also means it can be very cheap to manufacture. A typical commercial TDS measuring pen can be found for as little as $5, and this device is even simpler than a typical TDS sensor. We are also proud of the auto-ranging resistance measurement. Our cup is able to automatically calibrate to a new drink being poured in, adjusting to its level of resistance (note that different drinks have different chemical compositions and therefore different resistances). This allows our cup to accommodate a wide range of different drinks. 
We are also proud of finding a simple way to notify users - developing an app would have taken away too much time that we could otherwise have put into furthering the cup's hardware design, given a small team of just two first-time hackers. ## What we learned We learned a lot about Arduino development and circuits, and refreshed our knowledge of Ohm's law. ## What's next for SafeCup The prototype we've delivered for this project is definitely not a finished product that is ready to be used. We have not performed any tests on whether liquids from the cup are actually consumable, since the liquids have been in contact with non-food-grade metal and may undergo electrochemical transformation due to the potential applied to the liquid. Our next step would be to ensure consumer safety. A TDS sensor alone also might not be sensitive enough for liquids that already have a high amount of TDS. Adding other simple complementary sensors could greatly increase the sensitivity of the device. These may include a dielectric constant sensor, a turbidity sensor, a simple UV-Vis light absorption sensor, or even simple electrochemical measurements. Other sensors, such as a water level sensor, could even be used to keep track of how much you have drunk throughout the night. We would also use a smaller-footprint microcontroller, which would greatly compact the device. In addition, we would like to incorporate wireless features that would eliminate the need to wire the cup to a computer. ## Ethical Implications For "Most Ethically Engaged Hack" We believe that our project could mean a lot to young people facing the risk of DFSA. Statistically, these people mostly consist of college students and teenagers who surround us all the time, and they are especially vulnerable to this type of crime. We have come a long way in showing that the idea of using a simple TDS sensor to detect illegal drugging works. With future improvements in its appearance and safety, we believe it could be a viable product that improves the safety of many people around us at colleges and parties.
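To illustrate the host-side logic described in "How we built it" (the computer receives readings from the Arduino, watches for a drastic TDS change, and pushes a notification), here is a small Python sketch. The serial port name, baud rate, change threshold, and notification endpoint are all assumptions; in particular, the real project uses the "pushed" service, whose API details are not given in the writeup, so the notification call below is just a placeholder HTTP POST.

```python
import statistics
import requests
import serial  # pyserial

THRESHOLD = 0.25          # flag a >25% jump relative to the rolling baseline (assumed)
WINDOW = 10               # number of recent readings forming the baseline
NOTIFY_URL = "https://example.com/push"  # placeholder for the push-notification service

def notify(message: str) -> None:
    """Placeholder push notification; the real project uses the 'pushed' API."""
    requests.post(NOTIFY_URL, json={"content": message}, timeout=5)

def monitor(port: str = "/dev/ttyUSB0") -> None:
    """Read TDS values streamed by the Arduino and alert on a drastic change."""
    readings = []
    with serial.Serial(port, 9600, timeout=2) as ser:
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            if not line:
                continue
            value = float(line)  # assumes the Arduino prints one TDS value per line
            if len(readings) >= WINDOW:
                baseline = statistics.median(readings[-WINDOW:])
                if baseline > 0 and abs(value - baseline) / baseline > THRESHOLD:
                    notify(f"SafeCup alert: TDS jumped from ~{baseline:.1f} to {value:.1f}")
                    readings.clear()  # re-baseline on the new drink composition
            readings.append(value)

if __name__ == "__main__":
    monitor()
```

Using a median over a short window rather than the last single reading is one way to tolerate noisy readings from a home-made sensor while still reacting to a sudden composition change.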
## Inspiration We're students, and that means one of our biggest inspirations (and some of our most frustrating problems) comes from a daily ritual - lectures. Some professors are fantastic. But let's face it, many professors could use some constructive criticism when it comes to their presentation skills. Whether it's talking too fast, speaking too *quietly* or simply not paying attention to the real-time concerns of the class, we've all been there. **Enter LectureBuddy.** ## What it does Inspired by lackluster lectures and little to no interfacing time with professors, LectureBuddy allows students to signal their instructors with teaching concerns on the spot while also providing feedback to the instructor about the mood and sentiment of the class. Because it is a web-based platform, instructors can create sessions from the familiarity of their smartphone or laptop. Students can then provide live feedback to their instructor by logging in with the appropriate session ID. At the same time, a camera intermittently analyzes the faces of students and provides the instructor with a live average mood for the class. Students are also given a chat room for the session to discuss material and ask each other questions. At the end of the session, the Lexalytics API is used to parse the chat room text and provide the instructor with the average tone of the conversations that took place. Another important use for LectureBuddy is as an alternative to tedious USATs or other instructor evaluation forms. Currently, teacher evaluations are completed at the end of term, and students are frankly no longer interested in providing critiques, as any change will not benefit them. LectureBuddy's live feedback and student interactivity provide the instructor with consistent information. This can allow them to adapt their teaching styles and change topics to better suit the needs of the current class. ## How I built it LectureBuddy is a web-based application; most of the development was done in JavaScript, Node.js, HTML/CSS, etc. The Lexalytics Semantria API was used for parsing the chat room data, and Microsoft's Cognitive Services emotion API was used to gauge the mood of the class. Other smaller JavaScript libraries were also utilised. ## Challenges I ran into The Lexalytics Semantria API proved to be a challenge to set up. The out-of-the-box JavaScript files came with some errors, and after spending a few hours troubleshooting with mentors, the team finally managed to get the Node.js version to work. ## Accomplishments that I'm proud of Two first-time hackers contributed some awesome work to the project! ## What I learned "I learned that json is a javascript object notation... I think" - Hazik "I learned how to work with node.js - I mean I've worked with it before, but I didn't really know what I was doing. Now I sort of know what I'm doing!" - Victoria "I should probably use bootstrap for things" - Haoda "I learned how to install mongoDB in a way that almost works" - Haoda "I learned some stuff about Microsoft" - Edwin ## What's next for Lecture Buddy * Multiple Sessions * Further in-depth analytics from an entire semester's worth of lectures * Pebble / Wearable integration! @Deloitte See our video pitch!
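The live class-mood feature in "What it does" comes down to averaging per-face emotion scores from each camera snapshot. LectureBuddy itself is written in JavaScript against Microsoft's Cognitive Services; the Python sketch below only illustrates the aggregation step, and the input format (a list of per-face score dictionaries) is assumed rather than taken from the actual API.

```python
from collections import defaultdict

def average_mood(faces: list) -> dict:
    """Average emotion scores across all faces detected in one snapshot.

    `faces` is assumed to look like the output of an emotion-detection API:
    [{"happiness": 0.7, "neutral": 0.2, "sadness": 0.1}, ...]
    """
    if not faces:
        return {}
    totals = defaultdict(float)
    for scores in faces:
        for emotion, value in scores.items():
            totals[emotion] += value
    return {emotion: total / len(faces) for emotion, total in totals.items()}

def dominant_mood(faces: list) -> str:
    """Label shown to the instructor: the emotion with the highest class-wide average."""
    averages = average_mood(faces)
    return max(averages, key=averages.get) if averages else "unknown"

snapshot = [
    {"happiness": 0.10, "neutral": 0.60, "sadness": 0.30},
    {"happiness": 0.05, "neutral": 0.80, "sadness": 0.15},
]
print(average_mood(snapshot))   # {'happiness': 0.075, 'neutral': 0.7, 'sadness': 0.225}
print(dominant_mood(snapshot))  # neutral
```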
## Inspiration As two members of our group of four are international students studying in the US, we encounter the problem that lots of American foods do not suit our appetites, which has a significant impact on our homesickness and well-being. In addition, since college students have limited time to buy groceries and cook, it is crucial to provide a platform where they can find recipes that best fit their taste and the ingredients they have available. Our team's project is inspired by this particular long-standing problem of international students in the US. Coming together as individuals who love cooking, and taking inspiration from this year's theme of CTRL+ALT+CREATE, we decided to build on the idea of Tinder and created Blendr, a Tinder for cooking lovers! ## What it does Our web application implements a recommender algorithm that shows the user a curated list of recommended recipes, where they can "swipe" left or right depending on their interest in each recipe. We hope to give users ideas on what to eat in a familiar and entertaining way. Our website allows the user to sign up and log in, enter information about their food preferences and restrictions as well as the ingredients they have on hand through a survey, and, based on the result of the survey, receive a curated list of recipes where they can choose between "like" and "skip" depending on their approval of the recipe. This website will not only save users time deciding what to cook and how to cook it, but also offer them their own personalized meal plan. ## How we built it On the client side, we utilized Figma to design the UI/UX for our web application. We implemented HTML, CSS, and JavaScript, with Bootstrap for responsiveness. We architected our back end using the Django Python framework. We used the results from the survey to create a list of keywords that are run through our Python functions to build a list of recommended recipes in our own meal() class. Ultimately, we scraped through 2 million recipes in the Edamam Recipe API to output recommendations for users. ## Challenges we ran into Originally, we wanted to build a web application that matches people using a food-based recommender system, thereby building a community for people to develop friendships through cooking together. However, due to the time constraint and the fact that the majority of our team are first-time hackers, we decided to create a less ambitious but still fully functional version of our intended product, laying the foundation for its future development. For the front end, we were stuck for hours making a top navigation bar with the logo on the left and other icons on the right, as there are lots of differences among tutorials about navigation bars and the HTML code was too confusing for us as beginners. We also had difficulties creating animations to change pages once users click on certain buttons, but we finally figured out the easiest way: assigning links to buttons. The Edamam API also presented some problems for us. We wanted to provide step-by-step instructions for each recipe, but the links attributed to the recipes by the API were all broken. Therefore, we could not provide the instructions to make the meals that the user approves. Additionally, the API itself had a ton of inconsistency in its design, which required us to think of an efficient algorithm to run our recommendation system over it. 
## Accomplishments that we're proud of We are proud of building nice-looking front-end components that are so responsive! In addition, our authentication and authorization for users were also successfully implemented. We also built algorithms that can scrape through 2 million recipes very quickly, and through that we managed to output relevant recommendations based on users' recipe preferences. ## What we learned HackHarvard was an enriching learning experience for all of us, since we got to engage in the product development process from the beginning through deployment. We got hands-on with front-end and back-end development for the first time, learned how to build a responsive website, and practiced building servers and adding our scripts to them. Additionally, we were able to set up authorization and authentication on our website, in addition to learning about advanced technologies like recommender systems. ## What's next for Blendr We plan to use machine learning in our recommender system to output more accurate recommendations. In addition, we plan to further improve our UI/UX and add new features such as a community hub for people with similar interests in cooking, and a way for users to look back at their recipes in their account. Overall, we are excited to improve Blendr beyond HackHarvard.
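As a concrete view of the keyword-based recommendation step in "How we built it", here is a simplified Python sketch that scores recipes by overlap between the user's survey keywords and each recipe's labels/ingredients, then filters out restricted items. It is a stand-in for the project's meal() class, not the actual code, and the recipe fields (`label`, `ingredients`, `healthLabels`) are assumptions loosely modeled on typical recipe-API responses.

```python
def score_recipe(recipe: dict, keywords: set, restrictions: set) -> int:
    """Return -1 if a restriction is violated, else the number of keyword hits."""
    text = " ".join([recipe["label"], *recipe["ingredients"], *recipe.get("healthLabels", [])]).lower()
    if any(r.lower() in text for r in restrictions):
        return -1
    return sum(1 for kw in keywords if kw.lower() in text)

def recommend(recipes: list, keywords: set, restrictions: set, top_n: int = 10) -> list:
    """Rank the recipe pool by keyword overlap and keep the best matches for swiping."""
    scored = [(score_recipe(r, keywords, restrictions), r) for r in recipes]
    scored = [(s, r) for s, r in scored if s > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for _, r in scored[:top_n]]

# Example with a tiny in-memory pool standing in for the API results.
pool = [
    {"label": "Tofu fried rice", "ingredients": ["tofu", "rice", "soy sauce"], "healthLabels": ["Vegan"]},
    {"label": "Beef pho", "ingredients": ["beef", "rice noodles", "star anise"]},
]
print([r["label"] for r in recommend(pool, keywords={"rice", "tofu"}, restrictions={"beef"})])
```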
winning
## What is Search and Protect We created a hack that can search through the public Twitter timeline histories of many people and determine whether they are at risk of self-harm or depression using personality profiling and sentiment analysis. ## How would this be used? Organizations such as the local police or mental health support groups would be able to keep a close eye on those who are not in a good state of mind or are having a rough time in life. People will often express their feelings on social media due to the feeling of semi-anonymity and the fact that they can hide behind a screen, so it is possible that a lot of people may be more transparent about heavy issues there. ## Technical Implementation To connect our backend to our frontend, we took full advantage of the simplicity and utility of stdlib to create numerous functions that we used at various points to perform simple tasks, such as scraping a Twitter timeline for text, sending a direct message to a specific user, and interacting with the Watson sentiment/personality analysis API. In addition, we have a website set up where an administrator is able to view the gathered information. ## The future for Search and Protect The next step would be setting up an automated bot farm that runs this project over relevant users. For example, a university mental health support group could run it over the followers of their official Twitter account. It could also incorporate an intelligent chat AI so that people can continue to talk and ask it for help even when there is nobody available in person.
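A rough sketch of the flagging pipeline implied by "Technical Implementation": pull the recent tweets for a user, run each through a sentiment analyzer, and flag the account if negative emotion dominates. The project splits these steps across stdlib functions calling Twitter and Watson; here the two external calls are left as placeholder functions, and the emotion keys and 0.6 threshold are assumptions, not values from the original.

```python
def fetch_timeline(handle: str, count: int = 50) -> list:
    """Placeholder for the stdlib function that scrapes a public Twitter timeline."""
    raise NotImplementedError("wire this up to the Twitter-scraping service")

def analyze(text: str) -> dict:
    """Placeholder for the Watson sentiment call; assumed to return emotion scores 0..1."""
    raise NotImplementedError("wire this up to the Watson analysis service")

def at_risk(handle: str, threshold: float = 0.6) -> bool:
    """Flag a user if the average sadness/fear score across recent tweets exceeds the threshold."""
    tweets = fetch_timeline(handle)
    if not tweets:
        return False
    negative_scores = []
    for tweet in tweets:
        emotions = analyze(tweet)            # e.g. {"sadness": 0.7, "fear": 0.2, "joy": 0.1}
        negative_scores.append(max(emotions.get("sadness", 0.0), emotions.get("fear", 0.0)))
    return sum(negative_scores) / len(negative_scores) > threshold

# An administrator-facing sweep might then look like:
# flagged = [h for h in monitored_handles if at_risk(h)]
```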
## Inspiration 1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at greater risk. As the mental health epidemic surges and support reaches capacity, we sought to build something that connects trained volunteer companions with people in distress in several convenient ways. ## What it does Vulnerable individuals are able to call or text any available trained volunteer during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device, to increase accessibility and comfort. ## How I built it Using Figma, we designed the front end and exported the frame into React, using Acovode for back-end development. ## Challenges I ran into Setting up Firebase to connect to the front-end React app. ## Accomplishments that I'm proud of Proud of the final look of the app/site with its clean, minimalistic design. ## What I learned The need for accessible mental health support is essential but still unmet, even with all the recent efforts. Using Figma and Firebase, and trying out many open-source platforms to build apps. ## What's next for HearMeOut We hope to increase the chatbot's support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
## Inspiration Sexual assault survivors are in tremendously difficult situations after being assaulted, having to sacrifice privacy and anonymity to receive basic medical, legal, and emotional support. Understanding how to proceed with one's life after being assaulted is also challenging because information on resources for victims is scattered across different communities, whether the victim is on an American college campus, in a foreign country, or in any number of other situations. Instead of building a single solution or organizing one set of resources to help sexual assault victims everywhere, we believe a simple, community-driven solution to this problem lies in Echo. ## What it does Using Blockstack, Echo facilitates anonymized communication among sexual assault victims, legal and medical help, and local authorities to foster a supportive online community for victims. Members of this community can share their stories, advice, and support for each other, knowing that they truly own their data and that it is anonymous to other users, thanks to Blockstack. Victims may also anonymously report incidents of assault on the platform as they happen, and these reports are shared with local authorities if a particular individual has been reported as an offender on the platform several times by multiple users. This incident data is also used to map geographically where sexual assault happens in small communities, to provide users of the app with information on safe walking routes. ## How we built it A crucial part of feeling safe as a sexual harassment survivor stems from the ability to stay anonymous in interactions with others. Our backend is built with this key foundation in mind. We used Blockstack's Radiks server to create a decentralized application that keeps each user's data local to that user. By encrypting the information when storing the data, we ensure user privacy and mitigate the risk of exposing user data. The user owns their own data. We integrated Radiks into our Node and Express backend server and used this technology to manage our app's database. On the frontend, we wanted to create an experience that eagerly welcomes users to a safe community and shares an abundance of information to empower victims to take action. To do this, we built the frontend with React and Redux, with styling in SASS. We use Blockstack's Radiks API to gather anonymous messages in the Support Room feature. We used Twilio's message forwarding API to ensure that victims could very easily start anonymous conversations with professionals such as healthcare providers, mental health therapists, lawyers, and other administrators who could empower them. We created an admin dashboard for police officials to supervise communities, equipped with Esri's maps that plot where sexual assaults happen so that officers can patrol those areas more often. On the other pages, we aggregate online resources and research into an easy guide that gives victims the ability to take action easily. We used Azure for our backend cloud hosting with Blockstack. ## Challenges we ran into We ran into issues of time, as we had ambitious goals for our multi-functional platform. Generally, we faced the learning curve of using Blockstack's APIs and integrating them into our application. We also ran into issues with React Router, as the Express routes were being overwritten by our frontend routes. 
## Accomplishments that we're proud of We had very little experience developing blockchain apps before, and this gave us hands-on experience with a use case we feel is really important. ## What we learned We learned about decentralized data apps and the importance of keeping user data private. We learned about blockchain's applications beyond just cryptocurrency. ## What's next for Echo Our hope is to get feedback from people impacted by sexual assault on how well our app can foster community, and to factor this feedback into the next version of the application. We also want to build out shadowbanning, a feature to block abusive content from spammers on the app, using a trust system between users.
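Two mechanics from the Echo writeup lend themselves to a short sketch: the rule that a report is only escalated to local authorities after an offender has been reported by multiple users, and the anonymous message relay via Twilio. The Python below illustrates both under stated assumptions; Echo's actual backend is Node/Express with Radiks, the 3-report threshold is invented for the example, and the phone numbers are placeholders (the call itself follows the standard `messages.create` pattern from Twilio's Python library).

```python
import os
from collections import defaultdict
from twilio.rest import Client

REPORT_THRESHOLD = 3  # assumed: escalate after 3 distinct reporters name the same offender
reports = defaultdict(set)  # offender_id -> set of anonymous reporter ids

def file_report(offender_id: str, reporter_id: str) -> bool:
    """Record an anonymous report; return True if it should be escalated to authorities."""
    reports[offender_id].add(reporter_id)
    return len(reports[offender_id]) >= REPORT_THRESHOLD

# Anonymous relay: the survivor messages a proxy number and the text is forwarded
# to a professional without revealing the survivor's own number.
twilio = Client(os.environ["TWILIO_SID"], os.environ["TWILIO_TOKEN"])
PROXY_NUMBER = "+15550000000"        # placeholder Twilio number owned by the app

def forward_message(professional_number: str, body: str) -> None:
    """Send the survivor's message from the proxy number, keeping them anonymous."""
    twilio.messages.create(to=professional_number, from_=PROXY_NUMBER, body=body)

if file_report(offender_id="user-123", reporter_id="anon-a"):
    print("Escalate report to local authorities")
```

Requiring several distinct reporters before escalation is one simple way to balance victim anonymity against the risk of a single false report triggering police involvement.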
winning