| Column | Type | Values |
| --- | --- | --- |
| anchor | string | lengths 1 – 23.8k |
| positive | string | lengths 1 – 23.8k |
| negative | string | lengths 1 – 31k |
| anchor_status | string (categorical) | 3 classes |
## Inspiration In college I have weird 50-minute slots of time in between classes when it wouldn't make sense to walk all the way back to my dorm, but at the same time 50 minutes is a long wait. I would love to be able to meet up with friends who are also free during this time, but I don't want to text every single friend asking if they are free. ## What it does Users of Toggle can open up the website whenever they are free and click on the hour time slot they are in. Toggle then provides a list of suggestions for what to do. All the events entered for that time will show up on the right side of the screen. Everyone is welcome to add to the events on Toggle, and each school could have its own version so that we all make the most of our free time on campus by meeting new people and learning about new communities we might not have run into otherwise. ## How I built it ## Challenges I ran into ## Accomplishments that I'm proud of I learned and built in JavaScript in 36 hours!! ## What I learned 24 arrays were not the way to go - object arrays are a life-saver. ## What's next for Toggle
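Toggle itself was built in JavaScript; the following is a minimal Python sketch of the "object arrays instead of 24 arrays" lesson, with hypothetical event data. Instead of keeping one array per hour, each event carries an `hour` field and gets filtered on demand.

```python
# Hypothetical sketch of the "object arrays" lesson: rather than 24 parallel
# arrays (one per hour slot), store each event as an object with an `hour`
# field and filter as needed. Names and data are illustrative only.
events = [
    {"hour": 10, "title": "Pickup volleyball", "location": "Main quad"},
    {"hour": 10, "title": "Coffee chat", "location": "Student center"},
    {"hour": 14, "title": "Study group", "location": "Library room 2"},
]

def events_for_hour(hour):
    """Return every event scheduled in the given hour slot."""
    return [e for e in events if e["hour"] == hour]

print(events_for_hour(10))  # both 10 AM events, no 24-array bookkeeping
```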
# Inspiration We came to Stanford expecting a vibrant college atmosphere. Yet walk past a volleyball or basketball court at Stanford mid-Winter quarter, and you’ll probably find it empty. As college students, our lives revolve around two pillars: productivity and play. In an ideal world, we spend intentional parts of our day fully productive–activities dedicated to our fulfillment–and some parts of our day fully immersed in play–activities dedicated solely to our joy. In reality, though, students might party, but how often do they play? Large chunks of their day are spent in their dorm room, caught between these two choices, doing essentially nothing. This doesn’t improve their mental health. Imagine, or rather, remember, when you were last in that spot. Even if you were struck by inspiration to get out and do something fun, who with? You could text your friends, but you don’t know enough people to play 4-on-4 soccer, or if anyone’s interested in joining you for some baking between classes. # A Solution When encountering this problem, frolic can help. Users can: see existing events, sorted with events “containing” most of their friends at the top; join an event, getting access to the names of all members of the event (not just their friends); or save/bookmark an event for later (no notification sent to others); and access full info on events they’ve joined or saved in the “My Events” tab. Additional, nice-to-have features include: a notification if their friend(s) have joined an event, in case they’d like to join as well. # Challenges & An Important Lesson Not only did none of us have iOS app development experience, but with less than 12 hours to go, we realized that with the original environment and language we were working in (Swift and Xcode), the learning curve to create the full app was far too steep. Thus, we essentially started anew. We realized the importance of reaching out for guidance from more experienced people early on, whether in a hackathon, academic, or career setting. /\* Deep down, we know how important times of play are–though we never seem to “have time” for them. In reality, this is often correlated with us being caught in a rift between the two poles we mentioned: not being totally productive, nor totally grasping the joy that we should ideally get from some everyday activities. \*/
## Inspiration As University of Waterloo students who are constantly moving in and out of new places and constantly changing roommates, we often ran into friction or difficulty in communicating with each other to get stuff done around the house. ## What it does Our platform allows roommates to quickly schedule and assign chores, as well as provide a message board for common things. ## How we built it Our solution is built on Ruby on Rails, meant to be a quick, simple solution. ## Challenges we ran into The time constraint made it hard to develop all the features we wanted, so we had to reduce scope on many sections and provide a limited feature set. ## Accomplishments that we're proud of We thought that we did a great job on the design, delivering a modern and clean look. ## What we learned Prioritize features beforehand, and stick to features that would be useful to as many people as possible. So, instead of overloading on features that may not be that useful, we should focus on delivering the core features and making them as easy to use as possible. ## What's next for LiveTogether Finish the features we set out to accomplish, and finish theming the pages that we did not have time to concentrate on. We will be using LiveTogether with our roommates, and are hoping to get some real use out of it!
losing
## Inspiration A couple weeks ago, a friend was hospitalized for taking Advil–she accidentally took 27 pills, which is nearly 5 times the maximum daily amount. Apparently, when asked why, she responded that that's just what she had always done and how her parents had told her to take Advil. The maximum amount of Advil you are supposed to take is 6 pills per day, before it becomes a hazard to your stomach. #### PillAR is your personal augmented reality pill/medicine tracker. It can be difficult to remember when to take your medications, especially when there are countless different restrictions for each different medicine. For people that depend on their medication to live normally, remembering and knowing when it is okay to take their medication is a difficult challenge. Many drugs have very specific restrictions (e.g. no more than one pill every 8 hours, 3 max per day, take with food or water), which can be hard to keep track of. PillAR helps you keep track of when you take your medicine and how much you take, to keep you safe by not over- or under-dosing. We also saw a need for a medicine tracker due to the aging population and the number of people who have many different medications that they need to take. According to health studies in the U.S., 23.1% of people take three or more medications in a 30-day period and 11.9% take 5 or more. That is over 75 million U.S. citizens that could use PillAR to keep track of their numerous medicines. ## How we built it We created an iOS app in Swift using ARKit. We collect data on the pill bottles from the iPhone camera and pass it to the Google Vision API. From there we receive the name of the drug, which our app then forwards to a Python web scraping backend that we built. This web scraper collects usage and administration information for the medications we examine, since this information is not available in any accessible API or queryable database. We then use this information in the app to keep track of pill usage and power the core functionality of the app. ## Accomplishments that we're proud of This is our first time creating an app using Apple's ARKit. We also did a lot of research to find a suitable website to scrape medication dosage information from and then had to process that information to make it easier to understand. ## What's next for PillAR In the future, we hope to be able to get more accurate medication information for each specific bottle (such as pill size). We would like to improve the bottle recognition capabilities, perhaps by writing our own classifiers or training our own dataset. We would also like to add features like notifications to remind you of good times to take pills to keep you even healthier.
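The write-up names a Python web scraping backend but not the site it scrapes; below is a hedged sketch of that step, with a placeholder URL and an assumed CSS selector, just to show the shape of the pipeline from drug name to dosage text.

```python
# Hedged sketch of a PillAR-style scraping backend: given a drug name from the
# vision step, fetch a dosage page and pull out the usage text. The URL
# pattern and the "#dosage" selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def fetch_dosage_info(drug_name: str) -> str:
    url = f"https://example-drug-reference.com/drugs/{drug_name.lower()}"  # placeholder
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    section = soup.select_one("#dosage")  # assumed element id on the scraped page
    return section.get_text(strip=True) if section else "No dosage info found."

print(fetch_dosage_info("ibuprofen"))
```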
**check out the project demo during the closing ceremony!** <https://youtu.be/TnKxk-GelXg> ## Inspiration On average, half of patients with chronic illnesses like heart disease or asthma don’t take their medication. Reports estimate that poor medication adherence could be costing the country $300 billion in increased medical costs. So why is taking medication so tough? People get confused and people forget. When the pharmacy hands over your medication, it usually comes with a stack of papers, stickers on the pill bottles, and then in addition the pharmacist tells you a bunch of mumbo jumbo that you won’t remember. <http://www.nbcnews.com/id/20039597/ns/health-health_care/t/millions-skip-meds-dont-take-pills-correctly/#.XE3r2M9KjOQ> ## What it does The solution: how are we going to solve this? With a small scrap of paper. NekoTap helps patients access important drug instructions quickly and when they need them. On the pharmacist’s end, he only needs to go through 4 simple steps to relay the most important information to the patients. 1. Scan the product label to get the drug information. 2. Tap the cap to register the NFC tag. Now the product and pill bottle are connected. 3. Speak into the app to make an audio recording of the important dosage and usage instructions, as well as any other important notes. 4. Set a refill reminder for the patients. This will automatically alert the patient once they need refills, a service that most pharmacies don’t currently provide as it’s usually the patient’s responsibility. On the patient’s end, after they open the app, they will come across 3 simple screens. 1. First, they can listen to the audio recording containing important information from the pharmacist. 2. If they swipe, they can see a copy of the text transcription. Notice how there are easy-to-access zoom buttons to enlarge the text size. 3. Next, there’s a YouTube instructional video on how to use the drug in case the patient needs visuals. Lastly, the menu options here allow the patient to call the pharmacy if they have any questions, and also set a reminder for themselves to take medication. ## How I built it * Android * Microsoft Azure mobile services * Lottie ## Challenges I ran into * Getting the backend to communicate with the clinician and the patient mobile apps. ## Accomplishments that I'm proud of Translations to make it accessible for everyone! Developing a great UI/UX. ## What I learned * UI/UX design * Android development
## Inspiration While talking to Mitt from the CVS booth, he opened my eyes to a problem that I was previously unaware of - counterfeits in the pharmaceutical industry. After a good amount of research, I learned that it was possible to build a solution during the hackathon. A friendly interface with a blockchain backend could track drugs immutably, and being able to trace an item from the factory to the consumer means safer prescription drugs for everyone. ## What it does Using our app, users can scan an item and use the provided passcode to make sure that the item they have is legit. Using just the QR scanner in our app, it is very easy to verify the goods you bought, as well as the location where the drugs were manufactured. ## How we built it We started off wanting to ensure immutability for our users; after all, our whole platform is made for users to trust the items they scan. What came to our minds was using blockchain technology, which would allow us to ensure each and every item would remain immutable and publicly verifiable by any party. This way, users would know that the data we present is always true and legitimate. After building the blockchain technology with Node.js, we started working on the actual mobile platform. To create both iOS and Android versions simultaneously, we used AngularJS to create a shared codebase so we could easily adapt the app for both platforms. Although we didn't have any UI/UX experience, we tried to make the app as simple and user-friendly as possible. We incorporated the Google Maps API to track and plot the locations where items are scanned and add them to our metadata, and added native packages like QR code scanning and generation to make things easier for users. Although we weren't able to publish to the app stores, we tested our app using emulators to ensure all functionality worked as intended. ## Challenges we ran into Our first challenge was learning how to build a blockchain ecosystem within a mobile app. Since the technology was somewhat foreign to us, we had to learn the ins and outs of what "makes" a blockchain and how to ensure its immutability. After all, trust and security are our number one priorities, and without them our app would be meaningless. In the end, we found a way to create this ecosystem and performed numerous unit tests to ensure it was up to industry standards. Another challenge we faced was getting the app to work in both iOS and Android environments. Since each platform has its own set of "rules and standards", we had to make sure that our functions worked in both and that no errors arose from platform deviations. ## What's next for NativeChain We hope to expand our target audience to secondhand commodities and the food industry. In today's society, markets such as eBay and Alibaba are flooded with counterfeit luxury goods such as clothing and apparel. When customers buy these goods from secondhand retailers on eBay, there's currently no way they can know for certain whether an item is as legitimate as claimed; they solely rely on the seller's word. However, we hope to disrupt this and allow customers to immediately view where the item was manufactured and whether it truly is from Gucci, rather than from a counterfeit market in China. Another industry we hope to expand to is food. People care about where the food they eat comes from, and whether it's kosher, organic, and non-GMO. Although the FDA regulates this to a certain extent, this data isn't easily accessible to customers.
We want to provide a transparent and easy way for users to see where the food they are eating comes from, showing them data like where the honey was produced, where the cows were raised, and when their fruits were picked. Outbreaks such as the Chipotle E. coli incident could be pinpointed, since we could see where the incident started and warn customers not to eat food coming from that area.
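NativeChain's actual chain is written in Node.js; the following is a minimal, hedged Python illustration of the hash-chaining idea behind the immutability claim. Every block commits to the previous block's hash, so tampering with any past record is detectable.

```python
# Minimal illustration (not the team's Node.js code) of hash-chained blocks:
# each block stores the previous block's hash, so editing an earlier record
# changes every hash after it and fails verification.
import hashlib, json, time

def block_hash(fields: dict) -> str:
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash: str, data: dict) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

chain = [new_block("0" * 64, {"drug": "ibuprofen", "factory": "Plant A"})]
chain.append(new_block(chain[-1]["hash"], {"event": "scanned", "location": "Pharmacy B"}))

def verify(chain) -> bool:
    for prev, cur in zip(chain, chain[1:]):
        recomputed = block_hash({k: prev[k] for k in ("timestamp", "data", "prev_hash")})
        if prev["hash"] != recomputed or cur["prev_hash"] != prev["hash"]:
            return False
    return True

print(verify(chain))  # True until someone tampers with an earlier block
```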
winning
This project was developed with the RBC challenge in mind: developing the Help Desk of the future. ## What inspired us We were inspired by our motivation to improve the world of work. ## Background If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution. ## Try it! <http://www.rbcH.tech> ## What we learned Using NLP, dealing with devilish CORS, implementing Docker successfully, struggling with Kubernetes. ## How we built it * Node.js for our servers (one server for our webapp, one for BotFront) * React for our front-end * Rasa-based Botfront, which is the REST API we are calling for each user interaction * We wrote our own Botfront database during the last day and night ## Our philosophy for delivery Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP. ## Challenges we faced Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence. ## Our code <https://github.com/ntnco/mchacks/> ## Our training data <https://github.com/lool01/mchack-training-data>
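Since the write-up says the web app calls a Rasa-based Botfront REST API for each user interaction, here is a hedged sketch of what that call can look like using Rasa's standard REST channel; the host, port, and sender id are placeholders, not the team's real endpoint.

```python
# Hedged sketch of calling a Rasa-based bot for each chat message via Rasa's
# REST channel. Host/port are assumed defaults, not the project's deployment.
import requests

def ask_bot(sender_id: str, text: str):
    resp = requests.post(
        "http://localhost:5005/webhooks/rest/webhook",  # assumed Rasa REST endpoint
        json={"sender": sender_id, "message": text},
        timeout=10,
    )
    resp.raise_for_status()
    return [m.get("text", "") for m in resp.json()]

print(ask_bot("user-42", "How do I reset my password?"))
```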
# The Ultimate Water Heater February 2018 ## Authors This is the TreeHacks 2018 project created by Amarinder Chahal and Matthew Chan. ## About Drawing inspiration from a diverse set of real-world information, we designed a system with the goal of efficiently utilizing only electricity to heat and pre-heat water as a means to drastically save energy, eliminate the use of natural gases, enhance the standard of living, and preserve water as a vital natural resource. Through the accruement of numerous APIs and the help of countless wonderful people, we successfully created a functional prototype of a more optimal water heater, giving a low-cost, easy-to-install device that works in many different situations. We also empower the user to control their device and reap benefits from their otherwise annoying electricity bill. But most importantly, our water heater will prove essential to saving many regions of the world from unpredictable water and energy crises, pushing humanity to an inevitably greener future. Some key features we have: * 90% energy efficiency * An average energy consumption rate of roughly 10 kWh per hour * Analysis of real-time and predictive ISO data of California power grids for optimal energy expenditure * Clean and easily understood UI for typical household users * Incorporation of the Internet of Things for convenience of use and versatility of application * Saving, on average, 5 gallons per shower, or over **100 million gallons of water daily**, in CA alone. \*\*\* * Cheap cost of installation and immediate returns on investment ## Inspiration By observing the RhoAI data dump of 2015 Californian home appliance usage through the use of R scripts, it becomes clear that water heating is not only inefficient but also performed in an outdated manner. Analyzing several prominent trends drew important conclusions: many water heaters are large consumers of gases and yet are frequently neglected, most likely due to the trouble of attaining successful installations and repairs. So we set our eyes on a safe, cheap, and easily accessed water heater with the goal of efficiency and environmental friendliness. In examining the inductive heating process that is replacing old stovetops with modern ones, we found the answer. It accounted for every flaw the data decried regarding water heaters, and would eventually prove to be even better. ## How It Works Our project essentially operates in several core parts running simultaneously: * Arduino (101) * Heating Mechanism * Mobile Device Bluetooth User Interface * Servers connecting to the IoT (and servicing via Alexa) Repeat all processes simultaneously. The Arduino 101 is the controller of the system. It relays information to and from the heating system and the mobile device over Bluetooth. It responds to fluctuations in the system. It guides the power to the heating system. It receives inputs via the Internet of Things and Alexa to handle voice commands (through the "shower" application). It acts as the peripheral in the Bluetooth connection with the mobile device. Note that neither the Bluetooth connection nor the online servers and webhooks are necessary for the heating system to operate at full capacity. The heating mechanism consists of a device capable of heating an internal metal through electromagnetic waves. It is controlled by the current (which, in turn, is manipulated by the Arduino) directed through the breadboard and a series of resistors and capacitors.
Designing the heating device involved heavy use of applied mathematics and a deeper understanding of the physics behind inductor interference and eddy currents. The calculations were quite messy but had to be accurate for performance reasons--Wolfram Mathematica provided inhuman amounts of assistance here. ;) The mobile device grants the average consumer a means of making the most out of our water heater and allows the user to make informed decisions at an abstract level, taking away the complexity of energy analysis and power grid supply and demand. It acts as the central connection for Bluetooth to the Arduino 101. The device harbors a vast range of information condensed into an effective and aesthetically pleasing UI. It also analyzes the current and future projections of energy consumption via the data provided by California ISO to most optimally time the heating process at the swipe of a finger. The Internet of Things provides even more versatility and convenience to the application in smart homes and with other smart devices. The implementation of Alexa positions the water heater as a front-runner in an evolutionary revolution for the modern age. ## Built With: (In no particular order of importance...) * RhoAI * R * Balsamiq * C++ (Arduino 101) * Node.js * Tears * HTML * Alexa API * Swift, Xcode * BLE * Buckets and Water * Java * RXTX (Serial Communication Library) * Mathematica * MATLAB (assistance) * Red Bull, Soylent * Tetrix (for support) * Home Depot * Electronics Express * Breadboard, resistors, capacitors, jumper cables * Arduino Digital Temperature Sensor (DS18B20) * Electric Tape, Duct Tape * Funnel, for testing * Excel * JavaScript * jQuery * Intense Sleep Deprivation * The wonderful support of the people around us, and TreeHacks as a whole. Thank you all! \*\*\* According to the Washington Post: <https://www.washingtonpost.com/news/energy-environment/wp/2015/03/04/your-shower-is-wasting-huge-amounts-of-energy-and-water-heres-what-to-do-about-it/?utm_term=.03b3f2a8b8a2> Special thanks to our awesome friends Michelle and Darren for providing moral support in person!
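The app times heating against California ISO grid data; here is a small, hedged Python sketch of that scheduling idea. The hourly price numbers and the two-hour heating duration are invented for illustration and are not the project's real data or logic.

```python
# Hedged sketch of the scheduling idea: given an hourly forecast of grid
# prices (or demand), pick the cheapest contiguous window long enough to heat
# the tank. The forecast values and 2-hour duration are made up.
forecast = [0.31, 0.28, 0.22, 0.18, 0.17, 0.19, 0.25, 0.33]  # $/kWh per hour (example)
HEAT_HOURS = 2

def best_start_hour(prices, duration):
    """Index of the window with the lowest total cost."""
    costs = [sum(prices[i:i + duration]) for i in range(len(prices) - duration + 1)]
    return min(range(len(costs)), key=costs.__getitem__)

start = best_start_hour(forecast, HEAT_HOURS)
print(f"Start heating at hour {start}")
```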
## Inspiration When we were deciding what to build for our hack this time, we had plenty of great ideas. We zeroed in on something that people like us would want to use. The hardest problem faced by people like us is managing the assignments, classes and the infamous LeetCode grind. Now, it would have been most useful if we could design an app that would finish our homework for us without plagiarizing things off of the internet, but since we could not come up with that solution (believe me, we tried) we did the next best thing. We tried our hands at making the LeetCode grind easier by using machine learning and data analytics. We are pretty sure every engineer has to go through this rite of passage. Since there is no way to circumvent this grind, our goal is simply to make it less painful and more focused. ## What it does The goal of the project was clear from the onset: minimizing the effort and maximizing the learning, thereby making the grind less tedious. We achieved this by using data analytics and machine learning to find the deficiencies in the user's knowledge base and recommend questions with an aim to fill the gaps. We also allow users to understand their data better by making simple queries over our chatbot, which utilizes NLP to understand and answer the queries. The overall business logic is hosted on the cloud over the Google App Engine. ## How we built it The project achieves its goals using 5 major components: 1. The web scraper to scrape the user data from websites like LeetCode. 2. Data analytics and machine learning to find areas of weakness and processing the question bank to find the next best question in an attempt to maximize learning. 3. Google App Engine to host the APIs created in Java which connect our front end with the business logic in the backend. 4. Google Dialogflow for the chatbot where users can make simple queries to understand their statistics better. 5. The Android app client where the user interacts with all these components, utilizing the synergy generated by the combination of the aforementioned components. ## Challenges we ran into There were a number of challenges that we ran into:- 1. Procuring the data: We had to build our own web scraper to extract the question bank and the data from the interview prep websites. The security measures employed by the websites didn't make our job any easier. 2. Learning new technology: We wanted to incorporate a chatbot into our app; this was something completely new to a few of us, and learning it in a short amount of time to write production-quality code was an uphill battle. 3. Building the multiple components required to make our ambitious project work. 4. Lack of UI/UX expertise. It is a known fact that not many developers are good designers; even though we are proud of the UI that we were able to build, we feel we could have done better with mockups etc. ## Accomplishments that we are proud of 1. Completing the project in the stipulated time. Finishing the app for the demo seemed like an insurmountable task on Saturday night after little to no sleep the previous night. 2. Production-quality code: We tried to keep our code as clean as possible by using best programming practices whenever we could so that the code is easier to manage, debug, and understand. ## What we learned 1. Building APIs in Spring Boot 2. Using MongoDB with Spring Boot 3. Configuring MongoDB in Google Cloud Compute 4. Deploying Spring Boot APIs in Google App Engine & basics of GAE 5.
Chatbots & building chatbots in DialogFlow 6. Building APIs in NodeJS & linking them with DialogFlow via Fulfillment 7. Scraping data using Selenium & the common challenges while scraping large volumes of data 8. Parsing scraped data & efficiently caching it ## What's next for CodeLearnDo 1. Incorporating leaderboards and a sense of community in the app to encourage learning.
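A hedged sketch of the recommendation idea described above: score each topic by the user's historical accuracy and suggest an unsolved question from the weakest topic. The data structures, topic names, and question bank are illustrative, not the team's actual schema.

```python
# Illustrative recommender: pick the topic with the lowest solve rate and
# return an unsolved question from it. Data shown here is invented.
user_stats = {"dynamic programming": {"attempted": 12, "solved": 4},
              "graphs": {"attempted": 8, "solved": 6},
              "arrays": {"attempted": 20, "solved": 18}}

question_bank = [
    {"id": 101, "topic": "dynamic programming", "title": "Longest Increasing Subsequence"},
    {"id": 202, "topic": "graphs", "title": "Course Schedule"},
]

def weakest_topic(stats):
    return min(stats, key=lambda t: stats[t]["solved"] / max(stats[t]["attempted"], 1))

def next_question(stats, bank, solved_ids=()):
    topic = weakest_topic(stats)
    for q in bank:
        if q["topic"] == topic and q["id"] not in solved_ids:
            return q
    return None

print(next_question(user_stats, question_bank))
```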
winning
## Inspiration Greenhouses require increased disease control and need to closely monitor their plants to ensure they're healthy. In particular, the project aims to capitalize on the recent cannabis interest. ## What it Does It's a sensor system composed of cameras and temperature and humidity sensors, layered with smart analytics, that allows the user to tell when plants in his/her greenhouse are diseased. ## How We built it We used the Telus IoT Dev Kit to build the sensor platform along with Twilio to send emergency texts (pending installation of the IoT edge runtime as of 8 am today). Then we used Azure to do transfer learning on VGGNet to identify diseased plants and report them to the user. The model is deployed to be used with IoT Edge. Moreover, there is a web app that can be used to show that the ## Challenges We Ran Into The datasets for greenhouse plants are in fairly short supply, so we had to use an existing network to help with saliency detection. Moreover, the low-light conditions in the dataset were in direct contrast (pun intended) to the PlantVillage dataset used to train for diseased plants. As a result, we had to implement a few image preprocessing methods, including something that's been used for plant health detection in the past: Eulerian magnification. ## Accomplishments that We're Proud of Training a PyTorch model at a hackathon and sending sensor data from the STM Nucleo board to Azure IoT Hub and Twilio SMS. ## What We Learned When your model doesn't do what you want it to, hyperparameter tuning shouldn't always be the go-to option. There might be (in this case, there was) some intrinsic aspect of the model that needed to be looked over. ## What's next for Intelligent Agriculture Analytics with IoT Edge
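The write-up mentions transfer learning on VGGNet in PyTorch; below is a hedged, minimal sketch of that step, with placeholder data and hyperparameters. It freezes the pre-trained feature extractor and retrains only the classifier head for healthy vs. diseased.

```python
# Hedged sketch of VGG transfer learning for healthy/diseased classification.
# Dataset, batch, and hyperparameters are placeholders, not the team's setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False  # keep the pre-trained feature extractor fixed

model.classifier[6] = nn.Linear(4096, 2)  # 2 classes: healthy, diseased

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

# One illustrative training step on a fake batch (replace with a real DataLoader).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```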
## Inspiration We wanted to create a proof-of-concept for a potentially useful device that could be used commercially and at a large scale. We ultimately decided to focus on the agricultural industry as we feel that there's a lot of innovation possible in this space. ## What it does The PowerPlant uses sensors to detect whether a plant is receiving enough water. If it's not, then it sends a signal to water the plant. While our proof of concept doesn't actually receive the signal to pour water (we quite like having working laptops), it would be extremely easy to enable this feature. All data detected by the sensor is sent to a webserver, where users can view the current and historical data from the sensors. The user is also told whether the plant is currently being automatically watered. ## How I built it The hardware is built on an Arduino 101, with dampness detectors being used to detect the state of the soil. We run custom scripts on the Arduino to display basic info on an LCD screen. Data is sent to the webserver via a program called Gobetwino, and our JavaScript frontend reads this data and displays it to the user. ## Challenges I ran into After choosing our hardware, we discovered that MLH didn't have an adapter to connect it to a network. This meant we had to work around this issue by writing text files directly to the server using Gobetwino. This was an imperfect solution that caused some other problems, but it worked well enough to make a demoable product. We also had quite a lot of problems with Chart.js. There are some undocumented quirks to it that we had to deal with - for example, data isn't plotted on the chart unless a label for it is set. ## Accomplishments that I'm proud of For most of us, this was the first time we'd ever created a hardware hack (and competed in a hackathon in general), so managing to create something demoable is amazing. One of our team members even managed to learn the basics of web development from scratch. ## What I learned As a team we learned a lot this weekend - everything from how to make hardware communicate with software, to the basics of developing with Arduino, to how to use the Chart.js library. For two of our team members, English isn't their first language, so managing to achieve this is incredible. ## What's next for PowerPlant We think that the technology used in this prototype could have great real world applications. It's almost certainly possible to build a more stable self-contained unit that could be used commercially.
# Easy-garden Machine learning model to take care of your plants easily. ## Inspiration Most people like having plants in their home and office, because they are beautiful and connect us with nature just a little bit. But most of the time we really don't take care of them, and they can get sick. With this in mind, a system that can monitor the health of plants and tell you if one of them has a disease could be helpful. The system needs to capture images in real time and then classify each one as diseased or healthy; in case of a disease it can notify you or even provide a treatment for the plant. ## What it does It is a machine learning model that takes an input image and classifies it as healthy or diseased, displaying the result on the screen. ## How I built it I used datasets of healthy and diseased plants from PlantVillage and developed a machine learning model in TensorFlow using the Keras API that outputs either healthy or diseased. The dataset consisted of 1943 images in the diseased category and 1434 images in the healthy category. The images come in different sizes and therefore different dimensions. Most of the images are JPEG, but the set also contains some PNG and GIF files. To feed the machine learning model, each pixel of the RGB color images had to be converted to a value between 0 and 1, and all the images resized to a dimension of 170 x 170. I used TensorFlow to feed the data to the neural network, and created 3 datasets with different distributions of the data: Training: 75%, Validation: 15% and Testing: 10%. ## Challenges I ran into Testing a few different convolutional neural network models gave very different results, and it was a little difficult to adapt to another architecture at first. I ended up with VGG16, pre-trained on ImageNet; we still changed the architecture a little bit, and it was possible to retrain part of the model before the end. App building continues to be a challenge, but I have learned a lot about it, since I had no experience with it, and trying to combine it with AR was very difficult. ## What I learned I learned about more neural network models that I hadn't used before, as well as some very useful APIs for developing the idea, some shortcuts for displaying data, and a lot about plant diseases. I also learned some basic things about the Azure and Kinvey platforms.
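A hedged sketch of the preprocessing pipeline described above: resize to 170 x 170, scale RGB pixels into [0, 1], and split the data roughly 75/15/10 into train/validation/test before training. The directory path is a placeholder for a PlantVillage-style folder layout with healthy/ and diseased/ subfolders.

```python
# Hedged Keras preprocessing sketch matching the described pipeline; the
# "plants/" path and seed are placeholders, not the author's actual setup.
import tensorflow as tf

IMG_SIZE = (170, 170)

train_ds, rest_ds = tf.keras.utils.image_dataset_from_directory(
    "plants/",               # placeholder path with healthy/ and diseased/ subfolders
    image_size=IMG_SIZE,
    batch_size=32,
    validation_split=0.25,   # 25% held out, split again below into valid + test
    subset="both",
    seed=42,
)
val_batches = int(rest_ds.cardinality()) * 3 // 5   # roughly 15% valid, 10% test
val_ds, test_ds = rest_ds.take(val_batches), rest_ds.skip(val_batches)

normalize = tf.keras.layers.Rescaling(1.0 / 255)    # map pixels from [0, 255] to [0, 1]
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
```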
winning
## Inspiration * New ways of interacting with games (while VR is getting popular, there is not anything that you can play without a UI right now) * Fully text-based game from the 80's * Mental health application of choose your own adventure games ## What it does * Natural language processing using Alexa * Dynamic game-play based on choices that user makes * Integrates data into ## How I built it * Amazon Echo (Alexa) * Node.js * D3.js ## Challenges I ran into * Visualizing the data from the game in a meaningful and interesting way as well as integrating that into the mental health theme * Story-boarding (i.e. coming up with a short, sweet, and interesting plot that would get the message of our project across) ## Accomplishments that I'm proud of * Being able to finish a demo-able project that we can further improve in the future; all within 36 hours * Using new technologies like NLP and Alexa * Working with a group of awesome developers and designers from all across the U.S. and the world ## What I learned * I learned how to pick and choose the most appropriate APIs and libraries to accomplish the project at hand * How to integrate the APIs into our project in a meaningful way to make UX interesting and innovative * More experience with different JavaScript frameworks ## What's next for Sphinx * Machine learning or AI integration in order to make a more versatile playing experience
### Friday 7PM: Setting Things into Motion 🚶 > > *Blast to the past - for everyone!* > > > ECHO enriches the lives of those with memory-related issues through reminiscence therapy. By recalling beloved memories from their past, those with dementia, Alzheimer’s and other cognitive conditions can restore their sense of continuity, rebuild neural pathways, and find fulfillment in the comfort of nostalgia. ECHO enables an AI-driven analytical approach to find insights into a patient’s emotions and recall, so that caregivers and family are better equipped to provide care. ### Friday 11PM: Making Strides 🏃‍♂️ > > *The first step, our initial thoughts* > > > When it came to wrangling the frontend, we kept our users in mind and knew our highest priority was creating an application that was intuitive and easy to understand. We designed with the idea that ECHO could be seamlessly integrated into everyday life. ### Saturday 9AM: Tripping 🤺 > > *Whoops! Challenges and pitfalls* > > > As with any journey, we faced our fair share of obstacles and roadblocks on the way. While there were no issues finding the right APIs and tools to accomplish what we wanted, we had to scour different forums and tutorials to figure out how we could integrate those features. We built ECHO with Next.js and deployed on Vercel (and in the process, spent quite a few credits spamming a button while the app was frozen..!). Backend was fairly painless, but frontend was a different story. Our vision came to life on Figma and was implemented with HTML/CSS on the ol’ reliable, VSC. We were perhaps a little too ambitious with the mockup and so removed a couple of the bells and whistles. ### Saturday 4PM: Finding Our Way 💪 > > *One foot in front of the other - learning new things* > > > From here on out, we were in entirely uncharted territory and had to read up on documentation. Our AI, the Speech Prosody model from Hume, allowed us to take video input from a user and analyze a user’s tone and face in real-time. We learned how to use websockets for streaming APIs for those quick insights, as opposed to a REST API which (while more familiar to us) would have been more of a handful due to our real-time analysis goals. ### Saturday 10PM: What Brand Running Shoes 👟 > > *Our tech stack* > > > Nikes. Apart from the tools mentioned above, we have to give kudos to the platforms that we used for the safe-keeping of assets. To handle videos, we linked things up to Cloudinary so that users can play back old memories and reminisce, and used Postgres for data storage. ### Sunday 7AM: The Final Stretch 🏁 > > *The power of friendship* > > > As a team composed of two UWaterloo CFM majors and a WesternU Engineering major, we had a lot of great ideas between us. When we put our heads together, we combined powers and developed ECHO. Plus, Ethan very graciously allowed us to marathon this project at his house! Thank you for the dumplings. ### Sunday Onward: After Sunrise 🌅 > > *Next horizons* > > > With this journey concluded, ECHO’s next great adventure will come in the form of adding cognitive therapy activities to stimulate the memory in a different way, as well as AI transcript composition (along with word choice analysis) for our recorded videos.
## Inspiration People struggle to work effectively in a home environment, so we were looking for ways to make it more engaging. Our team came up with the idea for InspireAR because we wanted to design a web app that could motivate remote workers to be more organized in a fun and interesting way. Augmented reality seemed very fascinating to us, so we came up with the idea of InspireAR. ## What it does InspireAR consists of the website, as well as a companion app. The website allows users to set daily goals at the start of the day. Upon completing all of their goals, the user is rewarded with a 3-D object that they can view immediately using their smartphone camera. The user can additionally combine their earned models within the companion app. The app allows the user to manipulate the objects they have earned within their home using AR technology. This means that as the user completes goals, they can build their dream office within their home using our app and AR functionality. ## How we built it Our website is implemented using the Django web framework. The companion app is implemented using Unity and Xcode. The AR models come from echoAR. Languages used throughout the whole project consist of Python, HTML, CSS, C#, Swift and JavaScript. ## Challenges we ran into Our team faced multiple challenges, as this was our first time ever building a website. Our team also lacked experience in the creation of back-end relational databases and in Unity. In particular, we struggled with orienting the AR models within our app. Additionally, we spent a lot of time brainstorming different possibilities for user authentication. ## Accomplishments that we're proud of We are proud of our finished product; however, the website is the strongest component. We were able to create an aesthetically pleasing, bug-free interface in a short period of time and without prior experience. We are also satisfied with our ability to integrate echoAR models into our project. ## What we learned As a team, we learned a lot during this project. Not only did we learn the basics of Django, Unity, and databases, we also learned how to divide tasks efficiently and work together. ## What's next for InspireAR The first step would be increasing the number and variety of models to give the user more freedom with the type of space they construct. We have also thought about expanding into the VR world using products such as Google Cardboard, and other accessories. This would give the user more freedom to explore more interesting locations other than just their living room.
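Since the site is Django-based and rewards a 3-D model once every daily goal is done, here is a hedged sketch of how that logic might be modeled; all model and field names (and the echoAR asset id field) are hypothetical, not taken from the team's codebase.

```python
# Hypothetical Django models and reward check for InspireAR-style daily goals.
from django.db import models

class Goal(models.Model):
    user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    date = models.DateField()
    description = models.CharField(max_length=200)
    completed = models.BooleanField(default=False)

class EarnedModel(models.Model):
    user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    echoar_model_id = models.CharField(max_length=64)  # id of the echoAR asset (assumed)
    earned_on = models.DateField(auto_now_add=True)

def maybe_award_model(user, date, model_id):
    """Award a 3-D model once every goal for the day is completed."""
    goals = Goal.objects.filter(user=user, date=date)
    if goals.exists() and not goals.filter(completed=False).exists():
        EarnedModel.objects.get_or_create(user=user, echoar_model_id=model_id)
```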
losing
## Inspiration We were inspired by the inconvenience faced by novice artists creating large murals, who struggle to use reference images for guiding their work. It can also give confidence to young artists who need a boost and are looking for a simple way to replicate references. ## What it does An **AR** and **CV** based artist's aid that enables easy image tracing and color blocking guides (almost like "paint-by-numbers"!) It achieves this by allowing the user to upload an image of their choosing, which is then processed into its traceable outlines and dominant colors. These images are then displayed in the real world on a surface of the artist's choosing, such as paper or a wall. ## How we built it The base for the image processing functionality (edge detection and color blocking) was **Python, OpenCV, numpy** and the **K-means** clustering algorithm. The image processing module was hosted on **Firebase**. The end-user experience was driven using **Unity**. The user uploads an image to the app. The image is ported to Firebase, which then returns the generated images. We used the Unity engine along with **ARCore** to implement surface detection and virtually position the images in the real world. The UI was also designed through packages from Unity. ## Challenges we ran into Our biggest challenge was the experience level of our team with the tech stack we chose to use. Since we were all new to Unity, we faced several bugs along the way and had to slowly learn our way through the project. ## Accomplishments that we're proud of We are very excited to have demonstrated the accumulation of our image processing knowledge and to have made contributions to Git. ## What we learned We learned that our aptitude lies at a lower level, in robust languages like C++, as opposed to using pre-built systems to assist development, such as Unity. In the future, we may find easier success building projects that refine our current tech stacks as opposed to expanding them. ## What's next for [AR]t After Hack the North, we intend to continue the project using C++ as the base for AR, which is more familiar to our team and robust.
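A hedged sketch of the two processing steps the write-up names, using Python, OpenCV, and numpy: Canny edge detection for the traceable outline and K-means on the pixels for the color-blocked guide. The input filename and the number of clusters are placeholders.

```python
# Illustrative outline + color-blocking pipeline; filename and K are placeholders.
import cv2
import numpy as np

img = cv2.imread("reference.jpg")  # placeholder input image

# 1) Traceable outline via Canny edge detection
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200)

# 2) Color blocking via K-means clustering of the pixels
K = 6
pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
blocked = centers.astype(np.uint8)[labels.flatten()].reshape(img.shape)

cv2.imwrite("outline.png", edges)
cv2.imwrite("color_blocks.png", blocked)
```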
## 💡 Our Mission Create an intuitive but tough game that gets its players to challenge their speed & accuracy. We wanted to incorporate an active element to the game so that it can be played guilt-free! ## 🧠 What it does It shows a sequence of scenes before beginning the game, including the menu and instructions. After a player makes it past the initial screens, the game begins and a wall with a cutout starts moving towards the player. The player can see both the wall and themselves positioned in the environment; as the wall appears closer, the player must mimic the shape of the cutout to make it past the wall. The more walls you pass, the faster and tougher the walls get. The highest score with 3 lives wins! ## 🛠️ How we built it We built the model to detect the person with their webcam using MoveNet and built a custom model using angle heuristics to estimate similarity between the user's pose and the expected pose. We built the game using React for the front end, designed the scenes and assets, and built the backend using Python Flask. ## 🚧 Challenges we ran into We were excited about trying out Unity, so we spent around 10-12 hours trying to work with it. However, it was a lot more complex than we initially thought, so we decided to pivot to building the UI using React towards the end of the first day. Although we became a lot more familiar with Unity and the structure of 2D games, it proved to be more difficult than we anticipated and we had to change our game plan to build a playable game. ## 🏆 Accomplishments that we're proud of Considering that we completely changed our tech stack at around 1AM on the second day of hacking, we are proud that we built a working product in an extremely tight timeframe. ## 📚What we learned This was the first time working with Unity for all of us. We got a surface-level understanding of working with Unity and how game developers structure their games. We also explored graphic design to custom design the walls. Finally, working with an angle heuristics model was interesting too. ## ❓ What's next for Wall Guys Next steps would be to improve the UI and add multiplayer!
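Below is a hedged Python sketch of an angle-heuristics pose check like the one described: compute joint angles from MoveNet-style (x, y) keypoints and compare them to the cutout's expected angles. The joints compared, the tolerance, and the example coordinates are illustrative choices, not the team's exact model.

```python
# Illustrative angle-heuristic pose match; joints, tolerance, and data are assumed.
import math

def angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y) pair."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0]) -
                       math.atan2(a[1] - b[1], a[0] - b[0]))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

# Joints to compare: (first point, vertex, second point)
JOINTS = [("left_shoulder", "left_elbow", "left_wrist"),
          ("right_shoulder", "right_elbow", "right_wrist")]

def pose_matches(keypoints, expected_angles, tolerance=25.0):
    """keypoints: name -> (x, y); expected_angles: one target angle per joint."""
    for i, (a, b, c) in enumerate(JOINTS):
        got = angle(keypoints[a], keypoints[b], keypoints[c])
        if abs(got - expected_angles[i]) > tolerance:
            return False
    return True

user = {"left_shoulder": (0.3, 0.4), "left_elbow": (0.3, 0.55), "left_wrist": (0.3, 0.7),
        "right_shoulder": (0.7, 0.4), "right_elbow": (0.7, 0.55), "right_wrist": (0.7, 0.7)}
print(pose_matches(user, expected_angles=[170.0, 170.0]))  # straight arms -> True
```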
## Inspiration One of our team members saw two foxes playing outside a small forest. Eager, he went closer to record them, but by the time he was there, the foxes were gone. Wishing he could have recorded them or at least gotten a recording from one of the locals, he imagined a digital system in nature. With the help of his teammates, this project grew into a real application and service which could change the landscape of the digital playground. ## What it does It is a social media and educational application that stores recorded data in a digital geographic tag, which is available for users of the app to access and play back. Unlike other social platforms, this application works only if you are at the geographic location where the picture was taken and the footprint was imparted. On the educational side, the application offers overlays of monuments, buildings or historical landscapes, where users can scroll through historical pictures of the exact location they are standing in. The images have captions that can be used for instruction and education, and the overlay function lets the user get a realistic experience of the location at a different time. ## How we built it Lots of hours of no sleep and thousands of GitHub pushes and pulls. We've seen more red lines this weekend than in years put together. We used APIs and tons of trial and error, experimentation, and absurd humour and jokes to keep us alert. ## Challenges we ran into The app did not want to behave, and the APIs would give us false results or, as in the case of Google Vision, inaccurate ones. Merging Firebase with Android Studio would rarely go down without a fight. The pictures we recorded would load horizontally, even if taken vertically. The GPS location and AR would cause issues with the server, and there are many more we just don't want to recall... ## Accomplishments that we're proud of The application is fully functional and has all the basic features we planned for it to have since the beginning. We got over a lot of bumps on the road and never gave up. We are proud to see this app demoed at Penn Apps XX. ## What we learned Firebase from very little prior experience, working with GPS services, recording longitude and latitude from the pictures we took and sending them to the server, placing digital tags on a spatial digital map, and using Mapbox. Working with the painful Google Vision to analyze our images before making them available for service and locating them on the map. ## What's next for Timelens Multiple features which we would have loved to finish at Penn Apps XX but which were unrealistic due to time constraints. New ideas for using the application in wider areas of daily life, not only in education and social networks. Creating an interaction mode between AR and the user to add functionality in augmentation.
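Since a recording is only playable when the user is physically at the spot where it was captured, here is a hedged sketch of that location gate using a haversine distance check; the 50-meter radius is an assumed threshold, not the app's real value.

```python
# Illustrative location gate: allow playback only within an assumed radius
# of the geotag. The threshold and coordinates are examples.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_play(user_pos, tag_pos, radius_m=50):
    return haversine_m(*user_pos, *tag_pos) <= radius_m

print(can_play((39.9522, -75.1932), (39.9523, -75.1930)))  # a few meters apart -> True
```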
losing
## Inspiration There are many scary things in the world ranging from poisonous spiders to horrifying ghosts, but none of these things scare people more than the act of public speaking. Over 75% of humans suffer from a fear of public speaking, but what if there was a way to tackle this problem? That's why we created Strive. ## What it does Strive is a mobile application that leverages voice recognition and AI technologies to provide instant actionable feedback by analyzing the voice delivery of a person's presentation. Once you have recorded your speech, Strive will calculate various performance variables such as: voice clarity, filler word usage, voice speed, and voice volume. Once the performance variables have been calculated, Strive will then render your performance variables in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. In the settings page, users will have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive will also send the feedback results via text message to the user, allowing them to share/forward an analysis easily. ## How we built it Utilizing the collaboration tool Figma, we designed wireframes of our mobile app. We used services such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit in order to perform calculations of the performance variables and used stdlib's cloud function features for text messaging. ## Challenges we ran into Given our skillsets from technical backgrounds, one challenge we ran into was developing a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user. ## Accomplishments that we're proud of Creating a fully functional mobile app while leveraging an unfamiliar technology stack to provide a simple application that people can use to start receiving actionable feedback on improving their public speaking skills. Anyone can use our app to improve their public speaking skills and conquer their fear of public speaking. ## What we learned Over the course of the weekend one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs. ## What's next for Strive - Your Personal AI Speech Trainer * Model voices of famous public speakers for a more realistic experience in giving personal feedback (using the Lyrebird API). * Ability to calculate more performance variables for an even better analysis and more detailed feedback
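Strive's analysis runs on IBM Watson and C#; as a hedged, language-agnostic illustration, the Python snippet below shows how two of the reported performance variables, filler-word usage and speaking speed, can be computed from a transcript and its duration. The filler list is a sample; the app lets users add their own custom filler words.

```python
# Illustrative filler-word and words-per-minute computation from a transcript.
import re

DEFAULT_FILLERS = {"um", "uh", "like", "basically", "literally", "actually"}

def analyze_speech(transcript: str, duration_seconds: float, custom_fillers=()):
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = DEFAULT_FILLERS | set(custom_fillers)
    filler_count = sum(1 for w in words if w in fillers)
    wpm = len(words) / (duration_seconds / 60)
    return {"words": len(words),
            "filler_words": filler_count,
            "filler_ratio": filler_count / max(len(words), 1),
            "words_per_minute": round(wpm, 1)}

print(analyze_speech("Um so basically our app, um, analyzes your speech", 12.0))
```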
## Inspiration Public speaking is greatly feared by many, yet it is a part of life that most of us have to go through. Despite this, preparing for presentations effectively is *greatly limited*. Practicing with others is good, but that requires someone willing to listen to you for potentially hours. Talking in front of a mirror could work, but it does not live up to the real environment of a public speaker. As a result, public speaking is dreaded not only for the act itself, but also because it's *difficult to feel ready*. If there were an efficient way of ensuring you aced a presentation, the negative connotation associated with them would no longer exist. That is why we have created Speech Simulator, a VR web application used for practicing public speaking. With it, we hope to alleviate the stress that comes with speaking in front of others. ## What it does Speech Simulator is an easy-to-use VR web application. Simply log in with Discord, import your script into the site from any device, then put on your VR headset to enter a 3D classroom, a common location for public speaking. From there, you are able to practice speaking. Behind the user is a board containing your script, split into slides, emulating a real PowerPoint-styled presentation. Once you have run through your script, you may exit VR, where you will find results based on the application's recording of your presentation. From your talking speed to how many filler words were said, Speech Simulator will provide you with stats based on your performance as well as a summary of what you did well and how you can improve. Presentations can be attempted again and are saved to our database. Additionally, any adjustments to the presentation templates can be made using our editing feature. ## How we built it Our project was created primarily using the T3 stack. The stack uses **Next.js** as our full-stack React framework. The frontend uses **React** and **Tailwind CSS** for component state and styling. The backend utilizes **NextAuth.js** for login and user authentication and **Prisma** as our ORM. The whole application was made type safe using **tRPC**, **Zod**, and **TypeScript**. For the VR aspect of our project, we used **React Three Fiber** for rendering **Three.js** objects, **React XR**, and **React Speech Recognition** for transcribing speech to text. The server is hosted on Vercel and the database on **CockroachDB**. ## Challenges we ran into Despite completing the project, we ran into numerous challenges during the hackathon. The largest problem was the connection between the web app on the computer and the VR headset. As the two were separate web clients, it was very challenging to communicate our site's workflow between the two devices. For example, if a user finished their presentation in VR and wanted to view the results on their computer, how would this be accomplished without the user manually refreshing the page? After discussing whether to use websockets or polling, we went with polling + a queuing system, which allowed each respective client to know what to display. We decided to use polling because it enables a serverless deploy, and we concluded that we did not have enough time to set up websockets. Another challenge we ran into was the 3D configuration of the application. As none of us had real experience with 3D web applications, it was a very daunting task to try and work with meshes and various geometry. However, after a lot of trial and error, we were able to manage a VR solution for our application.
## What we learned This hackathon provided us with a great amount of experience and lessons. Although each of us learned a lot about the technological aspect of this hackathon, there were many other takeaways during this weekend. As this was most of our group's first 24-hour hackathon, we learned to manage our time effectively in a day's span. With a small time limit and a semi-large project, this hackathon also improved our communication skills and the overall coherence of our team. However, we did not just learn from our own experiences, but also from others. Viewing everyone's creations gave us insight into what makes a project meaningful, and we gained a lot from looking at other hackers' projects and their presentations. Overall, this event provided us with an invaluable set of new skills and perspective. ## What's next for VR Speech Simulator There are a ton of ways that we believe we can improve Speech Simulator. The first and potentially most important change is the appearance of our VR setting. As this was our first project involving 3D rendering, we had difficulty adding colour to our classroom. This reduced the immersion that we originally hoped for, so improving our 3D environment would allow the user to practice more accurately. Furthermore, as public speaking implies speaking in front of others, large improvements can be made by adding human models into VR. On the other hand, we also believe that we can improve Speech Simulator by adding more functionality to the feedback it provides to the user. From hand gestures to tone of voice, there are so many ways of differentiating the quality of a presentation that could be added to our application. In the future, we hope to add these new features and further elevate Speech Simulator.
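The team's real stack is Next.js/tRPC on Vercel; purely as a hedged illustration of the polling + queue coordination they describe, the Flask sketch below shows the idea: the VR client pushes a "presentation finished" event onto a per-user queue, and the desktop client polls until one appears. All routes and payloads are hypothetical.

```python
# Minimal illustration (not the team's implementation) of polling + a per-user
# event queue so one client can learn when the other has finished.
from collections import defaultdict, deque
from flask import Flask, jsonify, request

app = Flask(__name__)
events = defaultdict(deque)  # user_id -> queue of pending events

@app.post("/events/<user_id>")
def push_event(user_id):
    events[user_id].append(request.get_json())  # e.g. {"type": "finished", "presentation": 3}
    return jsonify(status="queued")

@app.get("/events/<user_id>/poll")
def poll_event(user_id):
    q = events[user_id]
    return jsonify(event=q.popleft() if q else None)

if __name__ == "__main__":
    app.run(port=5000)
```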
## Inspiration Public speaking is a critical skill in our lives. The ability to communicate effectively and efficiently is a very crucial, yet difficult skill to hone. A few of us on the team grew up competing in public speaking competitions, so we understand all too well the challenges that individuals looking to improve their public speaking and presentation skills face. Building off of our experience of effective techniques and best practices, and through analyzing the speech patterns of very well-known public speakers, we have designed a web app that will target weaker points in your speech and identify your strengths to make us all better and more effective communicators. ## What it does By analyzing speaking data from many successful public speakers from a variety of industries and backgrounds, we have established relatively robust standards for optimal speed, energy levels and pausing frequency during a speech. Taking into consideration the overall tone of the speech, as selected by the user, we are able to tailor our analyses to the user's needs. This simple and easy-to-use web application will offer users insight into their overall accuracy, enunciation, WPM, pause frequency, energy levels throughout the speech, and error frequency per interval, and summarize some helpful tips to improve their performance the next time around. ## How we built it For the backend, we built a centralized RESTful Flask API to fetch all backend data from one endpoint. We used Google Cloud Storage to store files greater than 30 seconds, as we found that locally saved audio files could only retain about 20-30 seconds of audio. We also used Google Cloud App Engine to deploy our Flask API, as well as Google Cloud Speech-to-Text to transcribe the audio. Various Python libraries were used for the analysis of voice data, and the resulting response returns within 5-10 seconds. The web application user interface was built using React, HTML and CSS and focused on displaying analyses in a clear and concise manner. We had two members of the team in charge of designing and developing the front end and two working on the back-end functionality. ## Challenges we ran into This hackathon, our team wanted to focus on creating a really good user interface to accompany the functionality. In our planning stages, we started looking into way more features than the time frame could accommodate, so a big challenge we faced was, firstly, dealing with the time pressure and, secondly, having to revisit our ideas many times and change or remove functionality. ## Accomplishments that we're proud of Our team is really proud of how well we worked together this hackathon, both in terms of team-wide discussions as well as efficient delegation of tasks for individual work. We leveraged many new technologies and learned so much in the process! Finally, we were able to create a good user interface to use as a platform to deliver our intended functionality. ## What we learned Following the challenge that we faced during this hackathon, we were able to learn the importance of iteration within the design process and how helpful it is to revisit ideas and questions to see if they are still realistic and/or relevant. We also learned a lot about the great functionality that Google Cloud provides and how to leverage that in order to make our application better. ## What's next for Talko In the future, we plan on continuing to develop the UI as well as add more functionality such as support for different languages.
We are also considering creating a mobile app to make it more accessible to users on their phones.
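As a concrete illustration of the kind of analysis described above, here is a minimal sketch (not Talko's actual code) of computing WPM and pause frequency, assuming the transcript comes back with per-word timestamps such as those Google Cloud Speech-to-Text can return when word time offsets are enabled; the `Word` structure, function name, and pause threshold are illustrative assumptions.

```python
# Rough sketch of speech metrics from a transcript with per-word timestamps.
# The 0.8 s pause threshold is illustrative, not Talko's actual setting.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float    # seconds

def speech_metrics(words: list[Word], pause_threshold: float = 0.8):
    """Return words-per-minute and pause count for a transcript."""
    if not words:
        return {"wpm": 0.0, "pauses": 0}
    duration_min = (words[-1].end - words[0].start) / 60.0
    wpm = len(words) / duration_min if duration_min > 0 else 0.0
    pauses = sum(
        1
        for prev, nxt in zip(words, words[1:])
        if nxt.start - prev.end >= pause_threshold
    )
    return {"wpm": round(wpm, 1), "pauses": pauses}

# Example
words = [Word("hello", 0.0, 0.4), Word("everyone", 0.5, 1.0), Word("today", 2.1, 2.5)]
print(speech_metrics(words))  # {'wpm': 72.0, 'pauses': 1}
```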
winning
## Motivation Our motivation was a grand piano that has sat in our project lab at SFU for the past two years. The piano belonged to a friend of Richard Kwok's grandfather and was being converted into a piano-scroll-playing piano. We had an excess of piano scrolls acting as door stops, and we wanted to hear these songs from the early 20th century, so we decided to pursue a method to digitally convert the piano scrolls into a digital copy of the song. The system scrolls through the entire piano scroll and uses OpenCV to convert the scroll markings to individual notes. The array of notes is converted in near real time to a MIDI file that can be played once complete. ## Technology Scrolling through the piano scroll used a DC motor controlled by an Arduino via an H-bridge, with the scroll wrapped around a Microsoft water bottle. The notes were recorded using OpenCV on a Raspberry Pi 3, programmed in Python. The result was a matrix representing each frame of notes from the Raspberry Pi camera. This array was exported to a MIDI file that could then be played. ## Challenges we ran into The OpenCV pipeline required a calibration method to ensure accurate image recognition. The external lighting conditions added extra complexity to the image recognition process. The lack of musical background among the members and the need to decrypt the piano scroll for the appropriate note keys was an additional challenge. The image recognition of the notes had to be dynamic for different orientations due to variable camera positions. ## Accomplishments that we're proud of The device works and plays back the digitized music. The design process was very fluid with minimal setbacks. The back-end processes were very well-designed with minimal fluids. Richard won best use of a sponsor technology in a technical pickup line. ## What we learned We learned how piano scrolls were designed and how they were written based on the desired tempo of the musician. Beginner musical knowledge relating to notes, keys and pitches. We learned about using OpenCV for image processing, and honed our Python skills while scripting the controller for our hack. As we chose to do a hardware hack, we also learned about the applied use of circuit design, H-bridges (L293D chip), power management, AutoCAD tools and rapid prototyping, friction reduction through bearings, and the importance of sheave alignment in belt-drive-like systems. We were also exposed to a variety of sensors for encoding, including laser emitters, infrared pickups, and light sensors, as well as PWM and GPIO control via an embedded system. The environment allowed us to network with and get lots of feedback from sponsors - many were interested to hear about our piano project and wanted to weigh in with advice. ## What's next for Piano Men Live playback of the system
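To make the frame-matrix-to-MIDI step concrete, here is a rough sketch under our own assumptions (the `note_matrix` layout, lane-to-pitch mapping, and tick timing are illustrative, not the team's actual code), using the mido library to emit note_on/note_off events as holes appear and disappear between frames:

```python
# Illustrative sketch: turn a binary note matrix from the scroll scanner into a
# MIDI file. note_matrix[frame][lane] is True when a hole is detected in that lane.
import mido

def matrix_to_midi(note_matrix, base_note=48, ticks_per_frame=60, out_path="scroll.mid"):
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    prev = [False] * len(note_matrix[0])
    delta = 0  # ticks elapsed since the last emitted message
    for frame in note_matrix:
        for lane, on in enumerate(frame):
            if on and not prev[lane]:
                track.append(mido.Message("note_on", note=base_note + lane, velocity=64, time=delta))
                delta = 0
            elif not on and prev[lane]:
                track.append(mido.Message("note_off", note=base_note + lane, velocity=64, time=delta))
                delta = 0
        prev = list(frame)
        delta += ticks_per_frame
    # close any notes still sounding at the end of the scroll
    for lane, on in enumerate(prev):
        if on:
            track.append(mido.Message("note_off", note=base_note + lane, velocity=64, time=delta))
            delta = 0
    mid.save(out_path)
```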
## Inspiration In the theme of sustainability, we noticed that a lot of people don't know what's recyclable. Some people recycle what shouldn't be recycled, and many people recycle much less than they could. We wanted to find a way to improve recycling habits while also incentivizing people to recycle more. Cyke, pronounced "psych" (as in psyched about recycling), was the result. ## What it does Cyke is a platform to get users in touch with local recycling facilities, to give recycling facilities more publicity, and to reward users for their good actions. **For the user:** When a user creates an account for Cyke, their location is used to tell them what materials can be recycled and what materials can't. Users are given a Cyke Card which holds their rank. When a user recycles, the amount they recycled is measured and reported to Cyke, which stores that data in our CockroachDB database. Then, based on revenue share from recycling plants, users are monetarily rewarded. The higher the person's rank, the more they receive for what they recycle. There are four ranks, ranging from "Learning" to "Superstar." **For Recycling Companies:** For a recycling company to be listed on our website, they must agree to a revenue share corresponding to the amount of material recycled (to be discussed). This would be in return for guiding customers towards them and increasing traffic and recycling quality. Cyke provides companies with an overview of how well recycling is going: statistics over the past month or more, top individual contributors to their recycling plant, and an impact score relating to how much social good they've done by distributing money to users and charities. Individual staff members can also be invited to the Cyke page to view these statistics and other more detailed information. ## How we built it Our site uses a **Node.JS** back-end, with **ejs** for server-side rendering of pages. The backend connects to **CockroachDB** to store user and company information, recycling transactions, and a list of charities along with how much has been donated to each. ## Challenges we ran into We ran into challenges mostly with CockroachDB. One of us was able to successfully create a cluster and connect to it via the macOS terminal; however, when it came to connecting it to our front-end, there were a lot of issues with getting the right packages for the Linux CLI as well as with connecting via our connection string. We spent quite a few hours on this, as using CockroachDB serverless was an essential part of hosting info about our recyclers, recycling companies, transactions, and charities. ## Accomplishments that we're proud of We're proud of getting CockroachDB to function properly. For two of the three members on the team this was our first time using a Node.js back-end, so it was difficult and rewarding to complete. On top of being proud of getting our SQL database off the ground, we're proud of our design. We worked a lot on the colors. We are also proud of using the serverless form of CockroachDB, so our compute cluster is hosted on Google Cloud Platform (GCP). ## What we've learned Through some of our greatest challenges came some of our greatest learning advances. Through toiling with CockroachDB and SQL tables, which none of us had experience with before, we learned a lot about environment variables and how to use Express and the pg driver to connect front-end and back-end elements. 
## What's next for Cyke To scale our solution, the next steps involve increasing the personalization aspects of our application. For users, that means adding capabilities that highlight local charities to donate to and locale-based recycling information. On the company side, there are optimizations to be made around the information we provide them, such as improving the impact score to consider more factors, like how consistent their users are.
## Inspiration We were inspired by our shared love of dance. We knew we wanted to do a hardware hack in the healthcare and accessibility spaces, but we weren't sure of the specifics. While we were talking, we mentioned how much we enjoy dance, and the campus DDR machine was brought up. We decided to incorporate that into our hardware hack with this handheld DDR mat! ## What it does The device is oriented so that there are LEDs and buttons in specified directions (i.e., left, right, top, bottom), and the user plays a song they enjoy next to the sound sensor, which activates the game. The LEDs are activated randomly to the beat of the song, and the user must click the button next to the lit LED. ## How we built it The team prototyped the device for the Arduino UNO with the initial intention of using a sound sensor as the focal point and slowly building around it, adding features where needed. The team was only able to add three features to the device due to the limited time span of the event. The first feature the team added was LEDs that reacted to the sound sensor, so they would light up to the beat of a song. The second feature the team attempted to add was a joystick; however, the team soon realized that the joystick was very sensitive and difficult to calibrate. It was replaced by buttons that worked much better and provided accessible feedback for the device. The last feature was an algorithm that added a factor of randomness to the LEDs to maximize the "game" aspect. ## Challenges we ran into There was definitely no shortage of errors while working on this project. Working with the hardware on hand was difficult, and it was often unclear whether an issue stemmed from the hardware or from an error in the code. ## Accomplishments that we're proud of The success of the aforementioned algorithm along with the sound sensor provided a very educational experience for the team. Calibrating the sound sensor and developing the functional prototype gave the team the opportunity to apply prior knowledge and exercise new skills. ## What we learned The team learned how to work within a fast-paced environment and experienced working with the Arduino IDE for the first time. A lot of research was dedicated to building the circuit and writing the code to make the device fully functional. Time was also lost on the joystick because the values output by the joystick did not align with those given in the datasheet. The team learned the importance of looking at recorded values instead of blindly following the datasheet. ## What's next for Happy Fingers The next steps for the team are to develop the device further. With extra time, the joystick could be calibrated and used as a viable component. Tuning the LED delay is another aspect, along with doing user research to determine the optimal timing for the game. To refine the game, the team is also thinking of adding a scoring system that allows the player to track their progress, with the device recording how many times they clicked the LED at the correct time, as well as a buzzer to notify the player when they have clicked the incorrect button. Finally, in true arcade fashion, a display that shows the high score and the player's current score could be added.
partial
## Inspiration As lane-keep assist and adaptive cruise control features become more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system. ## What it does We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately by braking, slowing down, and/or turning. ## How we built it Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle. ## Challenges we ran into One of the main challenges was transferring data from an iOS app/device to the Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly. ## Accomplishments that we're proud of Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project, and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight! ## What we learned We learned about Swift and, more generally, about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future. ## What's next for Anti-Bumper Car - A Collision Avoidance System Specifically for this project, solving an issue related to file IO and reducing latency would be the next steps in providing a more reliable collision avoidance system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road.
## Inspiration As OEMs (original equipment manufacturers) and consumers keep installing brighter and brighter headlights, oncoming traffic can be left blinded. Combined with fatigue and the difficulty of judging distance, it becomes increasingly hard to drive safely at night. An extra pair of night-vision eyes would go a long way toward protecting your own, and that's where NCAR comes into play. The Nighttime Collision Avoidance Response system provides that extra set of eyes via an infrared camera that uses machine learning to classify obstacles detected in the road and projects light to indicate them, allowing safe driving regardless of the time of day. ## What it does * NCAR provides users with affordable wearable tech that ensures driver safety at night * With its machine learning model, it can detect when humans are on the road even when it is pitch black * NCAR alerts users of obstacles on the road by projecting a beam of light onto the windshield using the OLED display * If the user's headlights fail, the infrared camera can act as a powerful backup light ## How we built it * Machine learning model: TensorFlow API * Python libraries: OpenCV, PyGame * Hardware: Raspberry Pi 4B, 1-inch OLED display, infrared camera ## Challenges we ran into * Training the machine learning model with limited training data * The infrared camera broke down, so we had to demo the ML model on old footage ## Accomplishments that we're proud of * Implementing a model that can detect human obstacles 5-7 meters from the camera * Building a portable design that can be mounted on any car ## What we learned * How to code different hardware sensors together * Running a TensorFlow model on a Raspberry Pi * Collaborating with people with different backgrounds, skills and experiences ## What's next for NCAR: Nighttime Collision Avoidance System * Building a more customized training model that can detect obstacles and calculate their distance from the user * A more sophisticated, easier-to-use system for alerting users of obstacles on the path * Adjusting the OLED screen with a 3D-printed mount to display the light in a more noticeable way
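For illustration, here is a hedged sketch of what a detection loop like this could look like: OpenCV reads frames from the infrared camera, a TF2 object-detection SavedModel (e.g. a COCO-trained model) scores them, and an alert fires when a person is detected with enough confidence. The model path, class id, and threshold are placeholders rather than NCAR's actual configuration.

```python
# Hedged sketch of a person-detection loop; not NCAR's real code.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("saved_model/")  # placeholder path to a COCO-trained detector
PERSON_CLASS_ID = 1  # COCO label id for "person"

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # detectors expect RGB
    inp = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    out = detect_fn(inp)
    scores = out["detection_scores"][0].numpy()
    classes = out["detection_classes"][0].numpy().astype(int)
    if any(c == PERSON_CLASS_ID and s > 0.5 for c, s in zip(classes, scores)):
        print("Obstacle detected: person ahead")  # stand-in for the OLED/light alert
```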
## Inspiration We noticed one of the tracks involved creating a better environment for cities through the use of technology, also known as making our cities 'smarter.' We observed that in places like Boston and Cambridge, there are many intersections with unsafe areas for pedestrians and drivers. **Furthermore, 50% of all accidents occur at intersections, according to the Federal Highway Administration**. The danger is compounded by careless drivers, missing stop signs, confusing intersection layouts, and more. ## What it does This project uses a Raspberry Pi to predict potentially dangerous driving situations. If we deduce that a collision could occur, our prototype starts emitting a 'beeping' sound loud enough to gain the attention of those surrounding the scene. Ideally, our prototype would be attached to traffic poles, similar to most traffic cameras. ## How we built it We utilized a popular computer vision library known as OpenCV to analyze the problem in Python. A demo of our prototype is shown in the GitHub repository, with a beeping sound occurring when the program finds a potential collision. Our demonstration is built using a Raspberry Pi and a Logitech camera. Using artificial intelligence, we capture the current positions of cars and calculate their direction and velocity. Using this information, we predict potential close calls and accidents. In such a case, we make a beeping sound simulating an alarm to notify drivers and surrounding participants. ## Challenges we ran into One challenge we ran into was detecting car positions from the frames reliably. A second challenge was calculating the speed and direction of vehicles based on the present frame and the previous frames. A third challenge was determining whether two lines cross based on their respective starting and ending coordinates; solving this proved vital in order to alert those in the vicinity quickly and correctly. ## Accomplishments that we're proud of We are proud that we were able to adapt this project to multiple settings. Even pointing the camera at a screen playing a real collision video off YouTube resulted in the prototype alerting us of a potential crash **before the accident occurred**. We're also proud of the fact that we were able to abstract away the hardware and make the layout of the final prototype aesthetically pleasing. ## What we learned We learned about the potential of smart intersections and the safety benefits they can provide to an ever-advancing society. We believe an implementation like ours could help reduce the 50% of collisions that occur at intersections by making those around the area more aware of potentially dangerous situations. We also learned a lot about working with OpenCV and computer vision. This was definitely a unique experience, and we were even able to walk around the surrounding Harvard campus trying to get good footage to test our model on. ## What's next for Traffic Eye We think we could build a better prediction model, as well as a weather-resilient model to account for the varying types of weather throughout the year. A prototype like this could be scaled and placed on actual roads given enough R&D. This can definitely help our cities advance with the rising capabilities of artificial intelligence and computer vision!
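The "do two predicted paths cross" check described above boils down to a standard segment-intersection test. The sketch below is a hypothetical helper under our own assumptions (pixel-per-frame velocities, a fixed look-ahead of 15 frames), not the project's exact code:

```python
# Extrapolate each car's motion a few frames ahead and test whether the two
# predicted paths, treated as line segments, intersect.
def ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 crosses segment q1-q2 (ignoring collinear edge cases)."""
    return ccw(p1, q1, q2) != ccw(p2, q1, q2) and ccw(p1, p2, q1) != ccw(p1, p2, q2)

def predicted_path(position, velocity, frames_ahead=15):
    x, y = position
    vx, vy = velocity  # pixels per frame, estimated from previous detections
    return (x, y), (x + vx * frames_ahead, y + vy * frames_ahead)

def potential_collision(car_a, car_b):
    return segments_intersect(*predicted_path(*car_a), *predicted_path(*car_b))

# car = (position, velocity)
print(potential_collision(((0, 0), (2, 1)), ((30, 0), (-2, 1))))  # True
```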
winning
## Inspiration Bill - "Blindness is a major problem today and we hope to have a solution that takes a step toward solving this" George - "I like engineering" We hope our tool gives a nonzero contribution to society. ## What it does Generates a description of a scene and reads the description aloud for visually impaired people. Leverages CLIP, recent research advancements, and our own contributions to take a stab at the unsolved **generalized object detection** problem, i.e. object detection without training labels. ## How we built it SenseSight consists of three modules: recorder, CLIP engine, and text2speech. ### Pipeline Overview Once the user presses the button, the recorder beams the recording to the compute cluster server. The server runs a temporally representative video frame through the CLIP engine. The CLIP engine is our novel pipeline that emulates human sight to generate a scene description. Finally, the generated description is sent back to the user side, where the text is converted to audio to be read. [Figures](https://docs.google.com/presentation/d/1bDhOHPD1013WLyUOAYK3WWlwhIR8Fm29_X44S9OTjrA/edit?usp=sharing) ### CLIP CLIP is a model proposed by OpenAI that maps images to embeddings via an image encoder and text to embeddings via a text encoder. Similar (image, text) pairs will have a higher dot product. ### Image captioning with CLIP We can map the image embeddings to text embeddings via a simple MLP (since image -> text can be thought of as lossy compression). The mapped embedding is fed into a transformer decoder (GPT2) that is fine-tuned to produce text. This process is called the CLIP text decoder. ### Recognition of Key Image Areas The issue with captioning the full input image is that a scene is composed of smaller sub-images. The CLIP text decoder is trained only on images containing a single subject (e.g. ImageNet/MS COCO images). We need to extract crops of the objects in the image and then apply the CLIP text decoder. This process is **generalized object detection**. **Generalized object detection** is unsolved; most object detection involves training with labels. We propose a viable approach. We sample crops in the scene, just like how human eyes dart around their view. We evaluate the fidelity of these crops, i.e. how much information and how many objects each crop contains, by embedding the crop using CLIP and then searching a database of text embeddings. The database is composed of noun phrases that we extracted. The database can be huge, so we rely on ScaNN (Google Research), a pipeline that uses machine-learning-based vector similarity search. We then filter out all subpar crops. The remaining crops are selected using an algorithm that tries to maximize the spatial coverage of the k crops. To do so, we sample many sets of k crops and select the set with the highest all-pairs distance. ## Challenges we ran into The hackathon went smoothly, except for the minor inconvenience of getting the server and user side to run in sync. ## Accomplishments that we're proud of The platform replicates the human visual process with decent results. A key subproblem is generalized object detection -- we proposed an approach involving CLIP embeddings and fast vector similarity search. We got the hardware + local + server (machine learning models on the MIT cluster) + remote APIs to work in sync. ## What's next for SenseSight A better CLIP text decoder. Crops tend to generate redundant sentences, so additional pruning is needed. We could use GPT-3 to remove the redundancy and make the speech flow more naturally. 
Real-time operation can be accomplished by using real networking protocols instead of scp + time.sleep hacks. To accelerate inference on crops, we can use multiple GPUs. ## Fun Fact The logo is generated by DALL-E :p
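To make the crop-scoring idea concrete, here is a simplified sketch under our own assumptions (a tiny hard-coded phrase bank, brute-force cosine similarity standing in for ScaNN, random fixed-size crops); it is not the actual SenseSight pipeline:

```python
# Score random crops by their best cosine similarity to a bank of noun-phrase
# CLIP text embeddings; ScaNN would replace the brute-force matmul at scale.
import random
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
phrases = ["a laptop", "a coffee mug", "a person", "a window"]  # placeholder phrase bank
with torch.no_grad():
    text_emb = model.encode_text(clip.tokenize(phrases))
    text_emb /= text_emb.norm(dim=-1, keepdim=True)

def crop_score(image: Image.Image, box):
    """Fidelity of a crop = best similarity to any phrase in the bank."""
    crop = preprocess(image.crop(box)).unsqueeze(0)
    with torch.no_grad():
        emb = model.encode_image(crop)
        emb /= emb.norm(dim=-1, keepdim=True)
    return (emb @ text_emb.T).max().item()

def sample_boxes(w, h, n=32, size=224):
    """Random square crops, emulating how the eye darts around the scene."""
    return [(x, y, x + size, y + size)
            for x, y in ((random.randint(0, w - size), random.randint(0, h - size))
                         for _ in range(n))]
```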
## Inspiration The idea was to help people who are blind discreetly gather context during social interactions and general day-to-day activities. ## What it does The glasses take a picture and analyze it using Microsoft, Google, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame. ## How I built it We took an RPi camera and increased the length of the cable. We then made a hole in the lens of the glasses and fit the camera in there. We added a touch sensor to discreetly control the camera as well. ## Challenges I ran into The biggest challenge we ran into was natural language processing, as in trying to piece together a human-sounding sentence that describes the scene. ## What I learned I learnt a lot about the different vision APIs out there and about creating/training your own neural network. ## What's next for Let Me See We want to further improve our analysis and reduce our analysis time.
## Inspiration We wanted to solve a unique problem we felt was impacting many people but was not receiving enough attention. With emerging and developing technology, we implemented neural network models to recognize objects in images and convert them to an auditory output. ## What it does XTS takes an **X** and turns it **T**o **S**peech. ## How we built it We used PyTorch, Torchvision, and OpenCV in Python. This allowed us to utilize pre-trained convolutional neural network models and region-based convolutional neural network models without investing too much time into training an accurate model, as we had limited time to build this program. ## Challenges we ran into While attempting to run the Python code, the video rendering and text-to-speech were out of sync, and the frame-by-frame object recognition was limited in speed by our system's graphics processing power and its ability to run the machine-learning model. We also faced an issue while trying to use our computer's GPU for faster video rendering, which led to long periods of frustration due to backwards incompatibilities between module versions. ## Accomplishments that we're proud of We are so proud that we were able to implement neural networks and object detection using Python. We were also happy to test our program with various images and video recordings and get accurate output. Lastly, we were able to create a sleek user interface that integrates with our program. ## What we learned We learned how neural networks function and how to augment a machine learning model, including dataset creation. We also learned object detection using Python.
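As a hedged sketch of this kind of pipeline (not XTS's actual code), the snippet below runs a pretrained torchvision Faster R-CNN on a frame and reads the detected object names aloud with pyttsx3; the label subset and confidence threshold are illustrative assumptions.

```python
# Detect objects in an RGB frame with a pretrained region-based CNN and speak them.
import torch
import torchvision
import pyttsx3
from torchvision.transforms.functional import to_tensor

COCO_NAMES = {1: "person", 17: "cat", 18: "dog", 62: "chair"}  # tiny subset for the demo

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
tts = pyttsx3.init()

def describe(frame_rgb):
    with torch.no_grad():
        out = model([to_tensor(frame_rgb)])[0]
    names = {COCO_NAMES.get(int(label), "object")
             for label, score in zip(out["labels"], out["scores"]) if score > 0.7}
    if names:
        sentence = "I can see " + ", ".join(sorted(names))
        tts.say(sentence)
        tts.runAndWait()
```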
winning
## Inspiration Protogress has been developed to improve the urban planning process. As cities expand, urban planners and engineers need accurate data on the areas to be developed or redeveloped. To assist urban planners, Protogress gathers information such as noise pollution, light intensity, and temperature to provide a better picture of an area. With its modular IoT design, it has the ability to provide low-cost preliminary surveys. ## What it does Protogress utilizes two Arduino 101s to realize an IoT network of various sensors that gather data on noise, light, human movement, and temperature across areas ranging from individual homes to entire cities. The Arduinos form a network by communicating via Bluetooth Low Energy. Through this network, Protogress records and transmits data from our physical sensor network to the database, to be displayed on a Google Maps interface. ## How It's Built **Frontend** The front end of the website was developed using the MEAN stack to create intensity zones through the Google Maps API. It extracts the data gathered by our peripheral devices from the Protogress database and displays it on Google Maps. **Backend** The Protogress database uses MongoDB to store the data obtained from the physical sensor network. The Central Arduino requests information from the peripheral devices, and a Python script documents the information. In this script, the data is quantized into a value that is sensible to humans and then sent to be stored in the Protogress database. **Peripherals** The Protogress IoT network uses Arduino 101 boards to record data from their sensors and store it in our database. In our demo network, there are two Arduinos: the Sensor and the Central. The Sensor acquires continuous analog signals from the sensors and is connected to the network through the built-in Bluetooth Low Energy system on the Arduino. The Central is connected to the internet through serial communication and a laptop. It can be set to gather information from the nearby Sensor as frequently as needed; it is currently set to request every ten seconds to minimize power consumption. Upon receiving data from the Sensor, the data is recorded by a Python script to be uploaded to our database. ## Challenges We faced several challenges when developing Protogress: integration of the Google Maps API with Angular, quantization of the sensor data, and the Bluetooth communication of the Arduinos. ## Accomplishments Our greatest accomplishments were the successful quantization of sensor data, transmission of sensor data through Bluetooth Low Energy, and our implementation of the Google Maps API to display our data. ## What Was Learned We learned about MEAN stack development and how to integrate it with the local host, as well as about quantization issues with the Arduino Grove kits. ## What's next for Protogress Protogress can be modified for a variety of services. Our next steps include adding different sensors, creating a larger network with more devices, and developing a network that can be displayed in real time. Some applications include furthering the range of capabilities, such as a pollution detector, and the possibility of permanent systems integrated with city infrastructure. This system demonstrates proof of concept, and we envision Protogress being realized with even lower-cost microcontrollers and packaged in a sleek modular design. 
With the addition of an air quality sensor, Protogress can be used to monitor pollution emitted from heavy industrial zones. Protogress can also be used as a natural disaster sensing system with a vibration sensor or a rain sensor. With these sensors, Protogress can be placed on buildings or other structures to detect internal vibrations or even building sway. Ideally, Protogress will continue to be improved as a device made to assist in providing safety and allowing efficient development of entire communities.
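For illustration, here is an approximate sketch of what the Central-side logging script could look like: a line of comma-separated readings arrives over serial every ten seconds, is quantized into human-sensible values, and is inserted into MongoDB. The serial port, field layout, and scaling are assumptions, not the actual Protogress script.

```python
# Read sensor lines from the Central Arduino over serial and store them in MongoDB.
import time
import serial
from pymongo import MongoClient

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=2)  # placeholder port/baud
readings = MongoClient("mongodb://localhost:27017")["protogress"]["readings"]

def quantize(raw, max_raw=1023, scale=100):
    """Map a 10-bit analog reading onto a human-sensible 0-100 scale."""
    return round(int(raw) / max_raw * scale, 1)

while True:
    line = ser.readline().decode(errors="ignore").strip()
    if line:
        noise, light, temp = line.split(",")  # assumed CSV layout: noise,light,temperature
        readings.insert_one({
            "timestamp": time.time(),
            "noise": quantize(noise),
            "light": quantize(light),
            "temperature_c": float(temp),
        })
    time.sleep(10)  # the Central polls the Sensor every ten seconds
```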
## Inspiration: Home security systems are very expensive and sometimes do not function as intended. Sometimes something simple may happen, such as forgetting to turn the lights off at home, or something more drastic, such as a large temperature change or even an intruder. Our solution aims to be a cheap alert system that detects three parameters and alerts the user. ## What it does: Our project detects light, temperature and sound, and sends the necessary message to the user. Light sensors are used to tell the user if they forgot to turn the lights off, sending an alert to the user. Temperature detection is used to alert the user to drastic changes in temperature, which may include extreme cold in winter or extreme heat in summer. Sound detection is used as a security system, configured to send alerts to the user once a certain decibel level is reached. Therefore, very loud sounds such as breaking glass, shouting or even a gunshot may be detected and an alert sent to the user. These messages are all sent to the user's phone. If anything is wrong, there is a circuit with a red LED that lights up whenever there is a situation. If the LED is off, the user gets no messages and everything is okay at home. Our project also associates user-friendly colors with conditions; for example, heat is red and cold is blue. ## How we built it: We used an Arduino as well as a Grove Kit for the sensors. These sensors were connected to the Arduino, and we also attached a breadboard that receives input from the Arduino. We coded the entire project and uploaded it onto the board. We then used an adapter to transfer the output from the Arduino to our phones and tested it to ensure it worked. ## Challenges we ran into: Unfortunately, there was a lack of hardware at our disposal. We wanted to implement Bluetooth technology to send data to our phones without wires and even tweet weather alerts. However, there were no Bluetooth hardware components, so we were unable to achieve this. Instead, we just used an adapter to connect the Arduino to our phone and show a test output. Testing was also an issue, since we were not able to generate extreme cold and warm conditions, so we had to change our code to test these parameters. ## Accomplishments that we're proud of: We had very little experience using Grove Kits and were able to figure out a way to implement our project. We were also able to change our original idea due to the limitation on Bluetooth and WiFi shield components. ## What we learned: We learned how to use and code the sensors in a Grove Kit. We also improved our knowledge of Arduino and building circuits. ## What's next for Home Automation and Security: Future improvements and modifications would include using Bluetooth and WiFi to send Twitter alerts to people on the user's contact list. In the future we may also add more components to the circuit, for example a remote button that can contact the police in the case of an intruder. We may also install other types of sensors, such as touch sensors placed on a welcome mat or door handle during long periods away from home. 
Code:

#include <Wire.h>    // assumed: the original angle-bracket include names were stripped; Wire.h is required by the Grove RGB LCD
#include "rgb_lcd.h"
#include <math.h>    // assumed: for log() in the temperature conversion

rgb_lcd lcd;

float temperature;   // stores temperature
int lightValue;      // stores light value
int soundValue;      // stores sound value
bool errorTemp = false;
bool errorLight = false;
bool errorSound = false;
bool errorTempCold = false;
bool errorTempHot = false;
int lights = 0;
int cold = 0;
int hot = 0;
int intruder = 0;
const int B = 4275;
const int R0 = 100000;
const int pinTempSensor = A0;
const int pinLightSensor = A1;
const int pinSoundSensor = A2;
const int pinLEDRed = 9;
const int pinLEDGreen = 8;

void setup() {
  lcd.begin(16, 2);
  Serial.begin(9600);
}

void loop() {
  temperature = 0;
  temp();                  // function that detects the temperature
  light();                 // function that detects light
  sound();                 // function that detects sounds
  lightMessages();         // function that checks conditions
  temperatureMessages();   // function that outputs everything to the user
  ok();                    // function that ensures all parameters are correctly calculated and tested
  serialErrors();          // function that checks logic and sends data to the output function
}

void light() { lightValue = analogRead(pinLightSensor); }

void sound() {
  soundValue = analogRead(pinSoundSensor);
  //Serial.println(soundValue);
  if (soundValue > 500) { errorSound = true; } else { errorSound = false; }
}

void temp() {
  int a = analogRead(pinTempSensor);
  float R = 1023.0 / ((float)a) - 1.0;
  R = R0 * R;
  temperature = 1.0 / (log(R / R0) / B + 1 / 298.15) - 303.14; // convert to temperature via datasheet
  delay(100);
}

void blinkLED() {
  analogWrite(pinLEDRed, HIGH);
  delay(500);
  analogWrite(pinLEDRed, LOW);
  delay(500);
}

void greenLED() { analogWrite(pinLEDGreen, HIGH); }
void screenRed() { lcd.setRGB(255, 0, 0); }
void screenBlue() { lcd.setRGB(0, 0, 255); }
void screenNormal() { lcd.setRGB(0, 50, 50); }

void serialErrors() {
  if (errorSound == false) {
    if (errorLight == true) {
      cold = 0; hot = 0; intruder = 0;
      if (lights == 0) { Serial.println("Important: Lights are on at home!"); lights++; } else { Serial.print(""); }
    } else if (errorTempCold == true) {
      lights = 0; hot = 0; intruder = 0;
      if (cold == 0) { Serial.println("Important: The temperature at home is low!"); cold++; } else { Serial.print(""); }
    } else if (errorTempHot == true) {
      lights = 0; cold = 0; intruder = 0;
      if (hot == 0) { Serial.println("Important: The temperature at home is high!"); hot++; } else { Serial.print(""); }
    }
  } else {
    lights = 0; cold = 0; hot = 0;
    if (intruder == 0) { Serial.println("IMPORTANT: There was a very loud sound at home! Possible intruder."); intruder++; } else { Serial.print(""); }
  }
}

void ok() {
  if (errorSound == false) {
    if (errorTemp == false && errorLight == false) {
      lcd.clear();
      analogWrite(pinLEDGreen, HIGH);
      lcd.setCursor(0, 0);
      lcd.print("Everything is ok");
      lcd.setCursor(1, 1);
      lcd.print("Temp = ");
      lcd.print(temperature);
      lcd.print("C");
      screenNormal();
    }
  }
}

void lightMessages() {
  if (lightValue > 500) {
    lcd.clear();
    lcd.setCursor(0, 0);
    lcd.print("Lights are on!");
    screenRed();
    blinkLED();
    errorLight = true;
  } else {
    errorLight = false;
  }
}

void temperatureMessages() {
  if (errorSound == false) {
    if (temperature < 20) {
      lcd.clear();
      lcd.setCursor(0, 1);
      lcd.print("Extreme Cold!");
      screenBlue();
      blinkLED();
      errorTemp = true; errorTempCold = true; errorTempHot = false;
    } else if (temperature > 30) {
      lcd.clear();
      lcd.setCursor(0, 1);
      lcd.print("Extreme Heat!");
      screenRed();
      blinkLED();
      errorTemp = true; errorTempHot = true; errorTempCold = false;
    } else {
      errorTemp = false; errorTempHot = false; errorTempCold = false;
    }
  } else {
    lcd.clear();
    lcd.setCursor(0, 0);
    lcd.print("LOUD SOUND");
    lcd.setCursor(0, 1);
    lcd.print("DETECTED!");
    screenRed();
    blinkLED();
    delay(5000);
    if (soundValue < 500) { errorSound = false; } else { errorSound = true; }
  }
}
## Inspiration As most of our team became students here at the University of Waterloo, many of us had our first experience living in a shared space with roommates. Without the constant parental nagging to clean up after ourselves that we had at home, and with some slightly disorganized roommates, many shared spaces in our residences and apartments, like kitchen counters, became cluttered and unusable. ## What it does CleanCue is a hardware product that tracks clutter in shared spaces using computer vision. By tracking unused items taking up valuable counter space and issuing speech and notification reminders, CleanCue encourages roommates to clean up after themselves. This product promotes individual accountability and respect, repairing relationships between roommates and filling the need some of us have for the nagging and reminders we used to get from parents. ## How we built it The current iteration of CleanCue is powered by a Raspberry Pi with a Camera Module sending a video stream to an Nvidia CUDA-enabled laptop/desktop. The laptop is responsible for running our OpenCV object detection algorithms, which enable us to log how long items are left unattended and send appropriate reminders to a speaker or notification services. We used Cohere to create unique messages with personality, making it more like a maternal figure, and we used TTS APIs to emulate a mother's voice. ## Challenges we ran into Our original idea was to create a more granular product which would customize decluttering reminders based on the items detected. For example, this version of the product could detect perishable food items and make reminders to return items to the fridge to prevent food spoilage. However, the pre-trained OpenCV models that we used did not have enough variety in trained items or enough precision to support this goal, so we settled for a simpler version for this limited hackathon period. ## Accomplishments that we're proud of We are proud of our planning throughout the event, which allowed us to both complete our project and enjoy the event. Additionally, we are proud of how we broke down our tasks at the beginning and identified what our MVP was, so that when there were problems, we knew what our core priorities were. Lastly, we are glad we submitted a working project to Hack the North!!!! ## What we learned The core frameworks that our project is built on were all new to the team. We had never used OpenCV or Taipy before, but had a lot of fun learning these tools. We also learned how to create improvised networking infrastructure to enable hardware prototyping in a public hackathon environment. Though not on the technical side, we also learned the importance of re-assessing throughout the project whether our solution was actually solving the problem we intended to solve, and of making the necessary adjustments based on our priorities. Also, this was our first hardware hack! ## What's next for CleanCue We definitely want to improve our prototype to more accurately describe a wide array of kitchen objects, enabling us to tackle more important issues like food waste prevention. Further, we realized that the technology in this project could also aid individuals with dementia. We would also love to explore the mobile app development space, and to use this system to flag dangers within the kitchen, for example a young child getting too close to the stove or an open flame left on for a long time. 
Additionally, we had constraints based on hardware availability; ideally, we would love to use an Nvidia Jetson-based platform for hardware compactness and flexibility.
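Here is a minimal sketch of the clutter timer at the heart of this idea (a hypothetical helper, not CleanCue's actual code): given the set of labels detected on the counter each frame, it tracks how long each item has been sitting out and fires a reminder once a threshold is exceeded.

```python
# Track how long each detected item has been left on the counter.
import time

REMIND_AFTER_S = 15 * 60  # nag after 15 minutes of clutter (illustrative threshold)
first_seen: dict[str, float] = {}
reminded: set[str] = set()

def update(detected_labels: set[str], now: float | None = None):
    now = now or time.time()
    # forget items that were cleaned up
    for label in list(first_seen):
        if label not in detected_labels:
            first_seen.pop(label)
            reminded.discard(label)
    # start or advance timers for items still on the counter
    for label in detected_labels:
        first_seen.setdefault(label, now)
        if now - first_seen[label] > REMIND_AFTER_S and label not in reminded:
            reminded.add(label)
            print(f"Reminder: the {label} has been out for a while, please put it away!")
```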
losing
# nwfacts [![license](https://img.shields.io/github/license/adrianosela/nwfacts.svg)](https://github.com/adrianosela/nwfacts/blob/master/LICENSE) [![Generic badge](https://img.shields.io/badge/nwfacts.tech-GREEN.svg)](https://nwfacts.tech) The ultimate anti-bias tool for browsing the news. ## Contents * [Aim and Motivations](#project-aim-and-motivations) * [High Level Design](#design-specification) * [Monetization](#means-to-monetization) ## Project Aim and Motivations All humans are susceptible to a large number of well-understood [cognitive biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases). These biases ultimately impact how we see and understand the world. This is an [nwHacks](https://www.nwhacks.io/) 2020 project which aims to empower everyone to browse news articles consciously, scoring sources for measurable bias indicators such as sensational language and non-neutral sentiment. Our final product is the result of the following secondary goals: * Create something simple that makes the world a slightly better place by fighting misinformation, aligning with [Mozilla's campaign](https://foundation.mozilla.org/en/campaigns/eu-misinformation/) * Explore the use of new technologies + [StdLib](https://stdlib.com/)'s AutoCode feature (in beta testing at the moment) + Google Cloud Platform's [Cloud Functions](https://cloud.google.com/functions/) + Google Cloud Platform's [Natural Language](https://cloud.google.com/natural-language/) processing + Delegating and managing DNS for multiple domains with [Domain.com](https://domain.com) * Leverage team members' (very distinct) skills without having to settle for a single programming language by employing a microservice-like architecture, where different components are fully isolated and modular * Take a shot at winning prizes! 
We have focused on featured challenges from Google Cloud, StdLib, and Domain.com ## Design Specification ### **System Architecture Diagram:** ![](https://raw.githubusercontent.com/adrianosela/nwfacts/master/docs/diagrams/architecture.png) ### **Components Specification:** * **Keyword Processing Server (Golang)** + Receives keyword queries from HTTP clients + Fetches relevant news article URLs using the free [NewsAPI](https://newsapi.org/) + Parses articles' contents using our homegrown article-parsing Cloud Function + Runs several algorithmic and integrated third-party API bias-measuring functions (mostly [Natural Language Processing](https://en.wikipedia.org/wiki/Natural_language_processing) which gives us metrics that can help us understand the legitimacy, intent, and biases associated with a piece of text) + Returns article metadata along with relevant metric scores back to the client + \*Caches article results by URL due to the expensive nature of text and ML processing * **Keyword Processing Client (ReactJS)** + Landing page style UI with a simple keyword search + Styled cards where each card contains relevant metadata and bias-metrics for a single article + Processing results export-to-CSV functionality * **Google Cloud Function: Article HTML-to-Text Parsing (Python)** + Receives a list of URLs from HTTP clients + Uses the [Newspaper3k](https://newspaper.readthedocs.io/en/latest/) library to extract body text from an article given its URL + Returns a populated map of URL-to-body (text of article) back to the client * **Serverless StdLib Function: Analytics-Export Flow (NodeJS)** + Receives raw result data from an HTTP client, which is our web application + Converts raw data into a user-friendly CSV file + A) AutoCode built-in Slack integration that publishes the CSV to Slack + B) AutoCode custom integration for sending the CSV to a given email * **Serverless StdLib Function: Relevant Tweets Search (NodeJS)** + Receives keywords to search for from an HTTP client + Returns relevant tweets back to the client Note that our Golang server and React front-end are both hosted on Google App Engine. ## Means to Monetization The website [nwfacts.tech](https://nwfacts.tech) is and will remain free whenever it is running. Eventually we could consider adding premium account functionality with access to more computationally expensive machine learning.
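As a sketch of the article-parsing Cloud Function bullet above, here is one plausible shape of the function using Newspaper3k; the request/response contract (a JSON list of URLs in, a URL-to-body map out) is our reading of the spec, not code copied from the repo.

```python
# HTTP Cloud Function sketch: {"urls": [...]} in, {url: body_text} out.
from newspaper import Article

def parse_articles(request):
    data = request.get_json(silent=True) or {}
    bodies = {}
    for url in data.get("urls", []):
        try:
            article = Article(url)
            article.download()
            article.parse()
            bodies[url] = article.text
        except Exception:
            bodies[url] = ""  # skip articles that fail to download or parse
    return bodies
```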
## Inspiration In a sense, social media has democratized news media itself -- through it, we have all become "news editors" to some degree, shaping what our friends read through our shares, likes, and comments. Is it any wonder, then, that "fake news" has become such a widespread problem? In such partisan times, it is easy to find ourselves siloed off within ideological echo chambers. After all, we are held in thrall not only by our cognitive biases to seek out confirmatory information, but also by the social media algorithms trained to feed such biases for the sake of greater ad revenue. Most worryingly, these ideological silos can serve as breeding grounds for fake news, as stories designed to mislead their audience are circulated within the target political community, building outrage and exacerbating ignorance with each new share. We believe that the problem of fake news is intimately related to the problem of the ideological echo chambers we find ourselves inhabiting. As such, we designed "Open Mind" to attack these two problems at their root. ## What it does "Open Mind" is a Google Chrome extension designed to (1) combat the proliferation of fake news, and (2) increase exposure to opposing viewpoints. It does so using a multifaceted approach -- first, it automatically "blocks" known fake news websites from being displayed in the user's browser, providing the user with a large warning screen and links to more reputable sources (the user can always click through to view the allegedly fake content, however; we're not censors!). Second, the user is given direct feedback on how partisan their reading patterns are, in the form of a dashboard which tracks their political browsing history. This dashboard then provides a list of recommended articles that users can read in order to "balance out" their reading history. ## How we built it We used React for the front end, and a combination of Node.js and Python for the back end. Our machine learning models for recommending articles were built using Python's TensorFlow library, and NLP was performed using the AYLIEN, Semantria, and Google Cloud Natural Language APIs. ## What we learned We learned a great deal more about fake news, and about NLP in particular. ## What's next for Open Mind We aim to implement a "political thermometer" that appears next to political articles, showing the degree to which a particular article is conservative or liberal. In addition, we aim to add a Facebook-specific "share verification" feature, where users are asked if they are sure they want to share an article that they have not already read (based on their browser history).
## Inspiration 🤔 The brain, the body's command center, orchestrates every function, but damage to this vital organ in contact sports often goes unnoticed. Studies show that 99% of football players are diagnosed with CTE, 87% of boxers have experienced at least one concussion, and 15-30% of hockey injuries are brain-related. If only there were a way for players and coaches to monitor the brain health of players before any long-term damage occurs. ## Our Solution💡 Impactify addresses brain health challenges in contact sports by integrating advanced hardware into helmets used in sports like hockey, boxing, and football. This hardware records all impacts sustained during training or games, capturing essential data from each session. The collected data provides valuable insights into an athlete's brain health, enabling them to monitor and assess their cognitive well-being. By staying informed about potential head injuries or concussion risks, athletes can take proactive measures to protect their health. Whether you're a player who wants to track your own brain health or a coach who wants to track all of your players' brain health, Impactify has a solution for both. ## How we built it 🛠️ Impactify leverages a mighty stack of technologies to optimize its development and performance. React was chosen for the front end due to its flexibility in building dynamic, interactive user interfaces, allowing for a seamless and responsive user experience. Django powers the backend, providing a robust and scalable framework for handling complex business logic, API development, and secure authentication. PostgreSQL was selected for data storage because of its reliability, advanced querying capabilities, and easy handling of large datasets. Last but not least, Docker was employed to manage dependencies across multiple devices. This helped maintain uniformity in the development and deployment processes, reducing the chances of environment-related issues. On the hardware side, we used an ESP32 microprocessor connected to a team member's mobile hotspot, allowing the microprocessor to send data over the internet. The ESP32 was then connected to 4 pressure sensors and an accelerometer, which it reads at fixed intervals. The data is sent over the internet to our web server for further processing. The parts were then soldered together and neatly packed into our helmet, and we replaced all the padding to make the helmet wearable again. The hardware was powered with a 9V battery, and LEDs and a power switch were added to the helmet so the user could turn it on and off. The LEDs served as a visual indicator of whether or not the ESP32 had an internet connection. ## Challenges we ran into 💥 The first challenge we had was getting all the sensors and components positioned in the correct locations within the helmet so that the data would be read accurately. On top of getting the positioning correct, the wiring and all the components had to be put in place in such a way that they did not detract from the protective aspect of the helmet. Getting all the components hidden properly and securely was a great challenge and took hours of tinkering. Another challenge that we faced was making sure that the data being read was accurate. We took a long time to calibrate the pressure sensors inside the helmet, because when the helmet is being worn, your head naturally exerts some pressure on the sides of the helmet. 
Making sure that our data input was reliable was a big challenge to overcome, because we had to iterate multiple times on tinkering with the helmet, collecting data, and plotting it on a graph to visually inspect it before we were satisfied with the result. ## Accomplishments that we're proud of 🥂 We are incredibly proud of how we turned our vision into a reality. Our team successfully implemented key features such as pressure and acceleration tracking within the helmet, and our software stack is robust and scalable with a React frontend and Django backend. We support individual user sessions and coach-level user management for sports teams, and have safety features such as sending an SMS to a coach if their player takes excessive damage. We developed React components that visualize the collected data, making the website easy to use, visually appealing and interactive. The hardware design was compact and elegant, seamlessly fitting into the helmet without compromising its structure. ## What we learned 🧠 Throughout this project, we learned a great deal about hardware integration, data visualization, and balancing safety with functionality. We also gained invaluable insights into optimizing the development process and managing complex technical challenges. ## What's next for Impactify 🔮 Moving forward, we aim to enhance the system by incorporating more sophisticated data analysis, providing even deeper insights into brain health, as well as fitting our hardware into a larger array of sports gear. We plan to expand the use of Impactify into more sports and further improve its ease of use for athletes and coaches alike. Additionally, we will explore ways to miniaturize the hardware even further to make the integration even more seamless.
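For a sense of what the data path from helmet to backend could look like, here is a hedged sketch of a Django ingestion view for the ESP32's readings; the model, field names, and payload shape are placeholders rather than Impactify's real schema.

```python
# Hypothetical Django view: the ESP32 POSTs JSON with four pressure readings
# and an acceleration value for a session.
import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

from .models import ImpactReading  # hypothetical model, not the real schema

@csrf_exempt
def ingest_reading(request):
    if request.method != "POST":
        return JsonResponse({"error": "POST required"}, status=405)
    payload = json.loads(request.body)
    reading = ImpactReading.objects.create(
        session_id=payload["session_id"],
        front=payload["pressure"][0],
        back=payload["pressure"][1],
        left=payload["pressure"][2],
        right=payload["pressure"][3],
        accel_g=payload["accel_g"],
    )
    # a real deployment might also check thresholds here and text the coach
    return JsonResponse({"id": reading.id}, status=201)
```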
partial
# Team Honeycrisp # Inspiration Every year there are dozens of heatstroke accidents, a number of which are vehicular heatstroke incidents. Our aim was to build a device for vehicles to help prevent these scenarios, whether it is young children or pets that are left in the vehicle. # What it does A detector that monitors the temperature/environmental conditions within a car and the presence of any living being, so as to alert the owner when the environment reaches dangerous conditions for any living being inside the vehicle (babies, pets, ...). # How the Detector Works The detector makes use of several sensors to determine whether the environmental conditions within a vehicle have reached dangerous levels, and whether there is a living being present within the vehicle. In the case where both are true, it sends a text message to the owner of the car warning them about the situation within the vehicle. # How we built it A team of 3 people made use of the Particle Electron board and several sensors (gas sensors, thermal sensors, an infrared motion sensor as well as an audio sensor) to create the project. # Challenges we faced There were challenges when dealing with the Particle Electron board, in that the sensors being used were made for an Arduino. This required specific libraries, which eventually caused the Particle Electron board to malfunction. # Accomplishments The team had no past experience working with a Particle Electron board, so for the work that was accomplished within the 24-hour span, we consider it a success. # What we learned We learned a lot about the Particle Electron board as well as the sensors that were utilized for this project. # Future Future developments to improve our device further would include: 1. Considering sensors with more precision to ensure that the conditions and parameters being monitored are as precise as required. 2. Implementing multiple emergency measures, in the case where reaching the owner becomes difficult or the conditions within the vehicle have reached alarming levels: a. Turning on the A/C of the vehicle b. Cracking the window slightly open for better circulation. c. Having the vehicle make noise (either via the alarm system or car horn) to gain the attention of any passerby or individuals within a reasonable distance to call for aid. d. A function that reports the incident to 911, along with the location of the vehicle.
## Inspiration Coming from South-East Asia, we have seen the havoc that natural disasters can wreak on urban populations. We wanted to create a probe that can assist on-site search and rescue team members in detecting and responding to nearby survivors. ## What it does Each Dandelyon probe detects changes in its surroundings and pushes data regularly to the backend server. Additionally, each probe has a buzzer that produces a noise if it detects changes in the environment, to attract survivors. Using various services, rescuers can visualise data from all probes at the same time to investigate and determine areas of interest for rescuing survivors. ## What it consists of * Deployable IoT probe * Live data streams * Data visualisation on Microsoft Power BI * Data visualisation on a WebApp with the Pitney Bowes API (dandelyon.org) ## How we built it **Hardware** * Identified the sensors that we would be using * Comprises: 1. Cell battery 2. Breadboard 3. Jumper wires 4. Particle Electron 2G (swapped over to our own Particle 3G as it has better connectivity) + cellular antenna 5. GPS + external antenna 6. Sound detector sensor 7. Buzzer 8. Accelerometer * Soldered pin headers onto sensors * Tested the functionality of each sensor 1. Wired each sensor alone to the Electron 2. Downloaded the open source libraries for each sensor from GitHub 3. Wrote a main function for the sensor to communicate with the Electron 4. Read the output from each sensor and checked if it was working * Integrated every sensor with the Electron * Tested the final functionality of the Electron **Software** * Infrastructure used 1. Azure IoT Hub 2. Azure Stream Analytics 3. Azure NoSQL 4. Microsoft Power BI 5. Google Cloud Compute 6. Particle Cloud with Microsoft Azure IoT Hub integration * Backend development 1. Flow of live data stream from Particle devices 2. Supplement live data with simulated data 3. Data is piped from Azure IoT Hub to Power BI and the WebApp backend 4. Power BI is used to display live dashboards with live charts 5. The WebApp displays a map with live data * WebApp development Deployed a NodeJS server on Google Cloud Compute connected to an Azure NoSQL database. It fetches live data for display on the map. ## Challenges we ran into Hardware integration. Connecting the Azure IoT stream to Power BI as well as our custom back-end. Working with live data streams. ## Accomplishments that we're proud of Integrating the full hardware suite. Integrating Probe -> Particle Cloud -> Azure IoT -> Azure Stream Analytics -> Power BI and Azure Stream Analytics -> Azure NoSQL -> Node.js -> Pitney Bowes/Leaflet. ## What we learned ## What's next for Dandelyon Prototyping the delivery shell used to deploy Dandelyon probes from high altitude. Developing the backend interface used to manage and assign probe responses.
## Inspiration Inspired by carbon trading mechanism among nations proposed by Kyoto Protocol treaty in the response to the threat of climate change, and a bunch of cute gas sensors provided by MLH hardware lab, we want to build a similar mechanism among people to monetize our daily carbon emission rights, especially the vehicle carbon emission rights so as to raise people's awareness of green house gas(GHG) emission and climate change. ## What it does We have designed a data platform for both regular users and the administrative party to manage carbon coins, a new financial concept we proposed, that refers to monetized personal carbon emission rights. To not exceed the annual limit of carbon emission, the administrative party will assign a certain amount of carbon coins to each user on a monthly/yearly basis, taking into consideration both the past carbon emission history and the future carbon emission amount predicted by machine learning algorithms. For regular users, they can monitor their real-time carbon coin consumption and trading carbon coins with each other once logging into our platform. Also, we designed a prototyped carbon emission measurement device for vehicles that includes a CO2 gas sensor, and an IoT system that can collect vehicle's carbon emission data and transmit these real-time data to our data cloud platform. ## How we built it ### Hardware * Electronics We built a real-time IoT system with Photon board that calculates the user carbon emission amount based on gas sensors’ input and update the right amount of account payable in their accounts. The Photon board processes the avarage concentration for the time of change from CO2 and CO sensors, and then use the Particle Cloud to publish the value to the web page. * 3D Priniting We designed the 3D structure for the eletronic parts. This strcture is meant to be attached to the end of the car gas pipe to measure the car carbon emission, whcih is one of the biggest emission for an average household. Similar structure design will be done for other carbon emission sources like heaters, air-conditioners as well in the future. ### Software * Back end data analysis We built a Long Short Term Memory(LSTM) model using Keras, a high-level neural networks API running on top of TensorFlow, to do time series prediction. Since we did not have enough carbon emission data in hand, we trained and evaluated our model on a energy consumption dataset, cause we found there is a strong correlation between the energy consumption data and the carbon emission data. Through this deep learning model, we can make a sound prediction of the carbon emission amount of the next month/year from the past emission history. * Front end web interface We built Web app where the user can access the real-time updates of their carbon consumption and balance, and the officials can suggest the currency value change based on the machine learning algorithm results shown in their own separate web interface. ## Challenges we ran into * Machine learning algorithms At first we have no clue about what kind of model should we use for time series prediction. After googling for a while, we found recurrent neural networks(RNN) that takes a history of past data points as input into the model is a common way for time series prediction, and its advanced variant, LSTM model has overcome some drawbacks of RNN. However, even for LSTM, we still have many ways to use this model: we have sequence-to-sequence prediction, sequence-to-one prediction and one-to-sequence prediction. 
After some failed experiments and careful research into the characteristics of our problem, we finally got a well-performing sequence-to-one LSTM model for energy consumption prediction. * Hardware We experienced some technical difficulties with 3D printing on the Ultimaker, but eventually used a more advanced FDM printer and got the part done. The gas sensor also took us quite a while to calibrate so that it gives out the right price based on consumption. ## Accomplishments that we're proud of It feels so cool to propose this financial concept that can make our planet a better place to live. Though we only have 3 people, we finally turned tons of caffeine into what we wanted! ## What we learned Sleep and Teamwork!! ## What's next for CarbonCoin 1) Expand the sources of carbon emission measurement using our devices, or convert other factors like electricity consumption into carbon emissions as well. The module will in the future be incorporated into all appliances. 2) Set up trading currency functionality to ensure the liquidity of CarbonCoin. 3) Explore the idea of using blockchain for CarbonCoin.
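As a rough illustration of the sequence-to-one setup described under "How we built it" above, a minimal Keras sketch might look like the following; the 12-month window, layer size, and training settings are illustrative assumptions rather than the values used in the project.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

LOOKBACK = 12  # assumed window: use the past 12 months to predict the next one

def make_windows(series, lookback=LOOKBACK):
    """Turn a 1-D consumption series into (samples, lookback, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

# Sequence-to-one: the LSTM reads a window of past values and a Dense head emits a single prediction.
model = keras.Sequential([
    layers.LSTM(32, input_shape=(LOOKBACK, 1)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# monthly = np.array([...])              # normalized monthly energy-consumption values
# X, y = make_windows(monthly)
# model.fit(X, y, epochs=50, batch_size=16, validation_split=0.2)
```

The same windowing would carry over unchanged once real carbon-emission history replaces the energy-consumption proxy.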
partial
## Inspiration Our inspiration stems from a fundamental realization about the critical role food plays in our daily lives. We've observed a disparity, especially in the United States, where the quality and origins of food are often overshadowed, leading to concerns about the overall health impact on consumers. Several team members had the opportunity to travel to regions where food is not just sustenance but a deeply valued aspect of life. In these places, the connection between what we eat, our bodies, and the environment is highly emphasized. This experience ignited a passion within us to address the disconnect in food systems, prompting the creation of a solution that brings transparency, traceability, and healthier practices to the forefront of the food industry. Our goal is to empower individuals to make informed choices about their food, fostering a healthier society and a more sustainable relationship with the environment. ## What it does There are two major issues that this app tries to address. The first is directed to those involved in the supply chain, like the producers, inspectors, processors, distributors, and retailers. The second is to the end user. For those who are involved in making the food, each step that moves on in the supply chain is tracked by the producer. For the consumer at the very end who will consume it, it will be a journey on where the food came from including its location, description, and quantity. Throughout its supply chain journey, each food shipment will contain a label that the producer will put on first. This is further stored on the blockchain for guaranteed immutability. As the shipment moves from place to place, each entity ("producer, processor, distributor, etc") will be allowed to make its own updated comment with its own verifiable signature and decentralized identifier (DiD). We did this through a unique identifier via a QR code. This then creates tracking information on that one shipment which will eventually reach the end consumer, who will be able to see the entire history by tracing a map of where the shipment has been. ## How we built it In order to build this app, we used both blockchain and web2 in order to alleviate some of the load onto different servers. We wrote a solidity smart contract and used Hedera in order to guarantee the immutability of the record of the shipment, and then we have each identifier guaranteed its own verifiable certificate through its location placement. We then used a node express server that incorporated the blockchain with our SQLite database through Prisma ORM. We finally used Firebase to authenticate the whole app together in order to provide unique roles and identifiers. In the front end, we decided to build a react-native app in order to support both Android and iOS. We further used different libraries in order to help us integrate with QR codes and Google Maps. Wrapping all this together, we have a fully functional end-to-end user experience. ## Challenges we ran into A major challenge that we ran into was that Hedera doesn't have any built-in support for constructing arrays of objects through our solidity contract. This was a major limitation as we had to find various other ways to ensure that our product guaranteed full transparency. 
## Accomplishments that we're proud of These are some of the things our app makes possible: * Accurate and tamper-resistant food data * Efficiently preventing, containing, or rectifying contamination outbreaks while reducing the loss of revenue * More transparency and trust in the authenticity of Verifiable Credential data * Verifiable Credentials that help eliminate and prevent fraud ## What we learned We learned a lot about the complexity of the food supply chain. We understand that this issue may take a lot of helping hands to build out, but it's really possible to make the world a better place. For the producers, distributors, and everyone else handling the food, the app helps them prevent outbreaks by keeping track of key information as shipments transfer from one place to another. They will be able to efficiently track and monitor their food supply chain system, ensuring trust between parties. The consumer wants to know where their food comes from, and this tool is perfect for them to understand where they are getting their next meal to stay strong and fit. ## What's next for FoodChain The next step is to continue building out all the different moving parts of this app. There are a lot of different directions the app can be taken given the complexity of the supply chain. We can continue to narrow down to a certain industry, or we can make this inclusive using the help of web2 + web3. We look forward to this being used at companies that want to prove that their food ingredients and products are the best.
## Inspiration The counterfeiting industry is anticipated to grow to $2.8 trillion in 2022, costing 5.4 million jobs. These counterfeiting operations push real producers to bankruptcy as cheaper knockoffs with unknown origins flood the market. In order to solve this issue, we developed a blockchain-powered service with tags that uniquely identify products, cannot be faked or duplicated, and also give transparency, since consumers today value not only the product itself but also the story behind it. ## What it does Certi-Chain uses a Python-based blockchain to authenticate any products with a Certi-Chain NFC tag. Each tag contains a unique ID attached to the blockchain that cannot be faked. Users are able to tap their phones on any product containing a Certi-Chain tag to view the authenticity of the product through the Certi-Chain blockchain. Additionally, if the product is authentic, users are also able to see where the product's materials were sourced and assembled. ## How we built it Certi-Chain uses a simple Python blockchain implementation to store the relevant product data. It uses a proof-of-work algorithm to add blocks to the blockchain and check if a blockchain is valid. Additionally, since this blockchain is decentralized, nodes (computers that host the blockchain) have to be synced using a consensus algorithm to decide which version of the blockchain from any node should be used. In order to render web pages, we used Python Flask, with our web server running the blockchain to fetch relevant information from the chain and display it to the user in a style that is easy to understand. A web client to input information into the chain was also created using Flask to communicate with the server. ## Challenges we ran into For all of our group members this project was one of the toughest we have had. The first challenge we ran into was that, once our idea was decided, we quickly realized only one group member had the appropriate hardware to test our product in real life. Additionally, we deliberately chose an idea in which none of us had experience. This meant we had to spend a portion of our time understanding concepts such as blockchain and frameworks like Flask. Beyond the starting choices, we also hit several roadblocks as we were unable to get the blockchain running on the cloud for a significant portion of the project, hindering development. However, in the end we were able to work through these issues and achieve a product that exceeded our expectations going in. We were all extremely proud of our result and believe that the struggle was definitely worth it. ## Accomplishments that we're proud of Our largest achievement was that we were able to accomplish all our wishes for this project in the short time span we were given. Not only did we learn Flask, some more Python, web hosting, NFC interactions, blockchain, and more, but we were also able to combine these ideas into one cohesive project. Being able to see the blockchain run for the first time after hours of troubleshooting was a magical moment for all of us. As for the smaller wins sprinkled through the day, we were able to work with physical NFC tags and create labels that we stuck on just about any product we had. We also came out more confident in the skills we already had and developed new skills along the way.
## What we learned In the development of Certi-Chain we learned so much about blockchains, hashes, encryption, Python web frameworks, product design, and the counterfeiting industry. We came into the hackathon with only a rudimentary idea of what blockchains even were, and throughout the development process we came to understand the nuances of blockchain technology and security. As for web development and hosting, using the Flask framework to create pages populated with Python objects was certainly a learning curve for us, but it was one we overcame. Lastly, we were all able to learn more about each other and about the difficulties and joys of pursuing a project that seemed almost impossible at the start. ## What's next for Certi-Chain Our team really believes that what we made in the past 36 hours can make a real, tangible difference in the world market. We would love to continue developing and pursuing this project so that it can be polished for real-world use. This includes tightening the security of our blockchain, looking into better hosting, and improving the user experience for anyone who would tap on a Certi-Chain tag.
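To give a concrete sense of the proof-of-work scheme mentioned under "How we built it", here is a minimal sketch of how blocks could be mined and a chain validated in plain Python; the difficulty target and the block fields are assumptions for illustration, not Certi-Chain's actual implementation.

```python
import hashlib
import json
import time

DIFFICULTY = 4  # assumed proof-of-work target: hashes must start with this many zeros

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, tag_id, product_info):
    """Bump the nonce until the block hash meets the difficulty target."""
    block = {"timestamp": time.time(), "prev_hash": prev_hash,
             "tag_id": tag_id, "product": product_info, "nonce": 0}
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

def chain_is_valid(chain):
    """Every block must link to its predecessor's hash and satisfy the proof of work."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
        if not block_hash(cur).startswith("0" * DIFFICULTY):
            return False
    return True
```

Scanning a tag would then amount to looking up its unique ID in the chain and walking the recorded sourcing history.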
## Inspiration In Canada, $31 billion worth of food ends up in landfills every year. This comes from excess waste produced by unwanted ugly produce, restaurants overcooking, etc. As a team we are concerned about the environment and the role we have to play in shaping a healthy future. While technology often has adverse effects on the environment, there is an opportunity to reshape the world by utilizing the power of human connection that technology enables across physical boundaries. ## What it does SAVOURe is a web application with a proposed mobile design that strives to save excess edible food from the landfill by connecting stores, restaurants, and events with hungry people and those in need, who save the food from the landfill by purchasing it at a discounted rate. In particular, our app would benefit students and people with lower income who struggle to eat enough each day. It is built for both the consumer and food provider perspectives. We created a database for providers to quickly post to the community about excess food at their location. Providers specify the type of food, the discounted price, the time it's available, as well as any other specifications. For the consumer side, we propose a mobile application that allows for quick browsing and purchasing of the food. Consumers have the opportunity to discover nearby providers through an integrated map, and once they purchase the food online, they can retrieve it from the store locker (this removes the need for additional monitoring by employees). ## How we built it We started off with an ideation phase using colourful Crayola markers and paper to hash out the essence of our idea. We then agreed upon the minimum viable product: the ability for consumers to browse and for restaurants to enter data to post food. From there we divided up the work into frontend, backend and UI/UX design. ## Challenges we ran into One challenge we had was constraining the scope of our project. We brainstormed a lot of functionality that we believed would be useful for our user base, from creating “template” postings to scanning QR codes to access the storage lockers containing the food items. Implementing all of this would have been impossible in the time frame, so we had to decide on a minimal set of functionality and move many ideas to the “nice-to-have” column. ## Accomplishments that we're proud of We are proud that we were trying to solve a problem that would make the world a better place. Not only are we creating a functional app, but we are also putting our skills to use for the improvement of humankind. ## What we learned We learned that we can create a solution when we are all heading in the same direction and combine our minds to think together, utilizing each of our own strengths. ## What's next for SAVOURe There was a lot of functionality we weren't able to implement in the given time frame, so there are still plenty of ideas to add to the app. To make the app useful, though, we would contact potential food providers to get a set of discounted postings to bring in a customer base.
partial
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when someone within a 300-meter radius is having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through the use of Fireauth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users can take a picture of their ID and their information can be extracted. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up doing and implementing things that we had never done before.
## Inspiration The need for faster and more reliable emergency communication in remote areas inspired the creation of FRED (Fire & Rescue Emergency Dispatch). Whether due to natural disasters, accidents in isolated locations, or a lack of cellular network coverage, emergencies in remote areas often result in delayed response times and first responders rarely getting the full picture of the emergency at hand. We wanted to bridge this gap by leveraging cutting-edge satellite communication technology to create a reliable, individualized, and automated emergency dispatch system. Our goal was to create a tool that could enhance the quality of information transmitted between users and emergency responders, ensuring swift, better-informed rescue operations on a case-by-case basis. ## What it does FRED is an innovative emergency response system designed for remote areas with limited or no cellular coverage. Using satellite capabilities, an agentic system, and a basic chain of thought, FRED allows users to call for help from virtually any location. What sets FRED apart is its ability to transmit critical data to emergency responders, including GPS coordinates, detailed captions of the images taken at the site of the emergency, and voice recordings of the situation. Once this information is collected, the system processes it to help responders assess the situation quickly. FRED streamlines emergency communication in situations where every second matters, offering precise, real-time data that can save lives. ## How we built it FRED is composed of four main components: a mobile application, a transmitter, a backend data processing system, and a simple frontend. 1. Mobile Application: The mobile app is designed to be lightweight and user-friendly. It collects critical data from the user, including their GPS location, images of the scene, and voice recordings. 2. Transmitter: The app sends this data to the transmitter, which consists of a Raspberry Pi integrated with Skylo's Satellite/Cellular combo board. The Raspberry Pi performs some local data processing, such as image transcription, to optimize the data size before sending it to the backend. This minimizes the amount of data transmitted via satellite, allowing for faster communication. 3. Backend: The backend receives the data, performs further processing using a multi-agent system, and routes it to the appropriate emergency responders. The backend system is designed to handle multiple inputs and prioritize critical situations, ensuring responders get the information they need without delay. 4. Frontend: We built a simple front-end to display the dispatch notifications as well as the source of the SOS message on a live-map feed. ## Challenges we ran into One major challenge was managing image data transmission via satellite. Initially, we underestimated the limitations on data size, which led to our satellite server rejecting the images. Since transmitting images was essential to our product, we needed a quick and efficient solution. To overcome this, we implemented a lightweight machine learning model on the Raspberry Pi that transcribes the images into text descriptions. This drastically reduced the data size while still conveying critical visual information to emergency responders. This solution enabled us to meet satellite data constraints and ensure the smooth transmission of essential data.
## Accomplishments that we’re proud of We are proud of how our team successfully integrated several complex components—mobile application, hardware, and AI powered backend—into a functional product. Seeing the workflow from data collection to emergency dispatch in action was a gratifying moment for all of us. Each part of the project could stand alone, showcasing the rapid pace and scalability of our development process. Most importantly, we are proud to have built a tool that has the potential to save lives in real-world emergency scenarios, fulfilling our goal of using technology to make a positive impact. ## What we learned Throughout the development of FRED, we gained valuable experience working with the Raspberry Pi and integrating hardware with the power of Large Language Models to build advanced IOT system. We also learned about the importance of optimizing data transmission in systems with hardware and bandwidth constraints, especially in critical applications like emergency services. Moreover, this project highlighted the power of building modular systems that function independently, akin to a microservice architecture. This approach allowed us to test each component separately and ensure that the system as a whole worked seamlessly. ## What’s next for FRED Looking ahead, we plan to refine the image transmission process and improve the accuracy and efficiency of our data processing. Our immediate goal is to ensure that image data is captioned with more technical details and that transmission is seamless and reliable, overcoming the constraints we faced during development. In the long term, we aim to connect FRED directly to local emergency departments, allowing us to test the system in real-world scenarios. By establishing communication channels between FRED and official emergency dispatch systems, we can ensure that our product delivers its intended value—saving lives in critical situations.
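To illustrate the caption-instead-of-pixels idea that got FRED's image data under the satellite limit, here is a hedged sketch in Python; the payload budget, the field names, and the stubbed `caption_image` helper are hypothetical stand-ins for the real on-device model and modem constraints.

```python
import json

MAX_PAYLOAD_BYTES = 256  # assumed satellite message budget; the real limit depends on the modem plan

def caption_image(jpeg_bytes: bytes) -> str:
    """Hypothetical stub for the lightweight captioning model that runs on the Raspberry Pi."""
    return "collapsed structure, one person visible, smoke rising to the north"

def build_sos_payload(lat: float, lon: float, jpeg_bytes: bytes, transcript: str) -> bytes:
    """Pack location, an image caption, and a trimmed voice transcript into a compact message."""
    payload = {
        "lat": round(lat, 5),
        "lon": round(lon, 5),
        "img": caption_image(jpeg_bytes),   # a text description instead of raw pixels
        "audio": transcript[:120],          # keep only the start of the transcript
    }
    raw = json.dumps(payload, separators=(",", ":")).encode()
    if len(raw) > MAX_PAYLOAD_BYTES:        # drop the lowest-priority field if still too large
        payload["audio"] = ""
        raw = json.dumps(payload, separators=(",", ":")).encode()
    return raw
```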
*Inspiration* One of the most important roles in our society is played by the various first responders who ensure the safety of the public through many different means. Innovation that helps these first responders is always favourable for society, since a more efficient approach by first responders means more lives saved. *What it does* The Watchdog app is a map which allows registered users to share the locations of events that would be important to first responders. These events could have taken place at any time, as their relevance varies for the different first responders. For example, if many people report fires, then by looking at the map, regardless of when these fires took place, firefighters can locate building complexes which might be prone to fire for some particular reason. It may be that firefighters can find where to pay more attention, as these locations would statistically have a higher probability of fires. This app does not only help firefighters with these statistics, but also the police and paramedics. With reports of petty crimes such as theft, police can find neighbourhoods where there is a statistical accumulation and focus resources there to improve efficiency. The same goes for paramedics for the varying types of accidents which could occur from dangerous jobs such as construction, letting paramedics be more prepared for certain locations. Major cities have many delays due to accidents or other hindrances to travel, and these delays are usually unavoidable and a nuisance to city travelers, so the app could also help typical citizens. *How we built it* The app was built using MongoDB, Express, and Node on the backend to manage the uploading of all reports added to the MongoDB database. React was used on the front end along with Google Cloud to generate the map using the Google Maps API, which the user can interact with by adding their own reports and viewing other reports. *Challenges we ran into* Our challenges mostly involved working with the Google Maps API, as doing so in React was new for all of us. Issues arose when trying to make the map interactable and using the map's features to add locations to the database, as we had never worked with the map like this before. However, these challenges were overcome by learning the Google Maps documentation as well as we could, and ensuring that the features we wanted were added even if they were still simple. *Accomplishments that we're proud of* We're mostly proud of coming up with an idea that we believe could have a strong impact in the world when it comes to saving lives and being efficient with the limited time that first responders have. Technically, being able to make the map interactive despite limited experience with the Google Maps API is something that we're proud of as well. *What we learned* We learned how to work with React and the Google Maps API together, along with how to move data from interactive maps like this to an online database in MongoDB. *What's next for Watchdog* Watchdog can add features when it comes to creating reports. These could vary, such as pictures of the incident or whether first responders were successful in preventing these incidents. The app is already published online so it can be used by people, and a main goal would be to make a mobile version so that more people could use it, even though it can be used by people right now.
winning
## Inspiration Save Plate is an app that focuses on narrowing equity differences in society. It is made with a passion for addressing SDG goals such as zero hunger, life on land, sustainable cities and communities, and responsible consumption and production. ## What it does It gives food facilities a platform to distribute their untouched meals to shelters via the Plate Saver app. It asks the restaurant to provide the number of meals that are available and could be picked up by the shelters. It also gives the flexibility to specify any kind of food restriction, respecting cultural and health-related requirements around food. ## How we built it * Jav ## Challenges we ran into There were many challenges that my teammates and I ran into, including learning new skills, teamwork and brainstorming. ## Accomplishments that we're proud of Creating maps, working with ## What we learned We believe our app is needed not only in one region but around the entire world. We are all taking steps towards building a safe community for everyone; therefore, we see our app's potential to run in collaboration with the UN so that together we can fight world hunger.
## Inspiration Behind Plate-O 🍽️ The inspiration for Plate-O comes from the intersection of convenience, financial responsibility, and the joy of discovering new meals. We all love ordering takeout, but there’s often that nagging question: “Can I really afford to order out again?” For many, budgeting around food choices can be stressful and time-consuming, yet essential for maintaining a healthy balance between indulgence and financial well-being. 🍔💡 Our goal with Plate-O was to create a seamless solution that alleviates this burden while still giving users the excitement of variety and novelty in their meals. We wanted to bridge the gap between smart personal finance and the spontaneity of food discovery, making it easier for people to enjoy new restaurants without worrying about breaking the bank. 🍕✨ What makes Plate-O truly special is its ability to learn from your habits and preferences, ensuring each recommendation is not only financially responsible but tailored to your unique tastes. By combining AI, personal finance insights, and your love for good food, we created a tool that makes managing your takeout spending effortless, leaving you more time to enjoy the experience. Bon Appétit! 📊🍽️ ## How We Built Plate-O 🛠️ At the core of Plate-O is its AI-driven recommendation engine, designed to balance two crucial factors: your financial well-being and your culinary preferences. Here’s how we made it happen: Backend: We used FastAPI to build a robust system for handling the user’s financial data, preferences, and restaurant options. By integrating the Capital One API, Plate-O can analyze your income, expenses, and savings to calculate an ideal takeout budget—maximizing enjoyment while minimizing financial strain. 💵📈 **Frontend**: Next.js powers our intuitive user interface. Users input their budget, and with just a few clicks, they get a surprise restaurant pick that fits their financial and taste profile. Our seamless UI makes ordering takeout a breeze. 📱✨ **Data Handling & Preferences**: MongoDB Atlas is our choice for managing user preferences—storing restaurant ratings, past orders, dietary restrictions, and other critical data. This backend allows us to constantly learn from user feedback and improve recommendations with every interaction. 📊🍴 **AI & Recommendation System**: Using Tune’s LLM-powered API, we process natural language inputs and preferences to predict what food users will love based on past orders and restaurant descriptions. The system evaluates each restaurant using criteria like sustainability scores, delivery speed, cost, and novelty. 🎯🍽️ **Surprise Meal Feature**: The magic happens when the system orders a surprise meal for users within their financial constraints. Plate-O delights users by taking care of the decision-making and getting better with each order. 🎉🛍️ ## Challenges We Overcame at Plate-O 🚧 **-Budgeting Complexity**: One of our first hurdles was integrating the Capital One API in a meaningful way. We had to ensure that our budgeting model accounted for users’ income, expenses, and savings in real-time. This required significant computation beyond the API and iteration to create a seamless experience. 💰⚙️ **Recommendation Fine-Tuning:** Balancing taste preferences with financial responsibility wasn’t easy. 
Most consumer dining preference data is proprietary, forcing us to spend a lot of time refining the recommendation system to ensure it could accurately predict what users would enjoy with small amounts of data, leveraging open-source Large Language Models to improve results over time. 🤖🎯 **-Data Integration**: Gathering and analyzing user preference data in real-time presented technical challenges, particularly when optimizing the system to handle large restaurant datasets efficiently while providing quick recommendations. Combining two distinct datasets, the Yelp restaurant datalist and an Uber Eats csv also required a bit of Word2Vec ingenuity. 🗄️⚡ ## Accomplishments at Plate-O 🏆 **-Smart Budgeting with AI**: Successfully implemented a model that combines personal finance data with restaurant preferences, offering tailored recommendations that help users stay financially savvy while enjoying variety in their takeout. 📊🍕 **- Novel User Experience**: Plate-O’s surprise meal feature takes the stress out of decision-making, delighting users with thoughtful recommendations that evolve with their taste profile. The platform bridges convenience and personalized dining experiences like never before. 🚀🥘 ## Lessons Learned from Plate-O’s Journey 📚 **-Simplicity Wins**: At first, we aimed to include many complex features, but we quickly realized that simplicity and focus lead to a more streamlined and effective user experience. It’s better to do one thing exceptionally well—help users order takeout wisely. 🌟🍽️ **-The Power of Learning**: A key takeaway was understanding the importance of iterative learning in both our recommendation engine and product development process. Every user interaction provided valuable insights that made Plate-O better. 🔄💡 **-Balancing Functionality and Delight**: Creating a tool that is both functional and delightful requires finding a perfect balance between user needs and technical feasibility. With Plate-O, we learned to merge practicality with the joy of food discovery. 💼🎉 ## The Future of Plate-O 🌟 **-Groceries and Beyond**: We envision expanding Plate-O beyond takeout, integrating grocery shopping and other spending categories into the platform to help users make smarter financial choices across their food habits. 🛒📊 **-Real-Time AI Assistance**: In the future, we plan to leverage AI agents that proactively guide users through their food budgeting journey, offering suggestions and optimizations for both takeout and groceries. 🤖🍱 **-Social Good**: While we already take environmental protection into account when recommending restaurants, we’re excited to explore adding complete restaurant ESG scores to help users make socially responsible dining choices, supporting local businesses and environmentally friendly options. 🌍🍽️ With Plate-O, we're not just changing how you order takeout; we're helping you become a more financially savvy foodie, one delicious meal at a time.
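As a sketch of how a takeout allowance and restaurant ranking along Plate-O's lines could fit together, the snippet below derives a budget from income, expenses, and savings, then scores candidate restaurants; the 30% share, the scoring weights, and the toy data are illustrative assumptions, not Plate-O's actual formula.

```python
def takeout_budget(monthly_income, fixed_expenses, savings_target, takeout_share=0.3):
    """Discretionary money left after bills and savings, with a share earmarked for takeout."""
    discretionary = max(monthly_income - fixed_expenses - savings_target, 0)
    return discretionary * takeout_share

def score_restaurant(r, budget_per_meal, past_cuisines,
                     w_cost=0.4, w_novelty=0.35, w_sustain=0.25):
    """Higher is better: affordable relative to budget, a new cuisine, a good sustainability score."""
    cost_fit = 1.0 - min(r["avg_price"] / budget_per_meal, 1.0)
    novelty = 0.0 if r["cuisine"] in past_cuisines else 1.0
    return w_cost * cost_fit + w_novelty * novelty + w_sustain * r["sustainability"]

restaurants = [  # toy data standing in for the Yelp/Uber Eats records
    {"name": "Thai Garden", "cuisine": "thai", "avg_price": 14, "sustainability": 0.8},
    {"name": "Pizza Planet", "cuisine": "pizza", "avg_price": 11, "sustainability": 0.5},
]

budget = takeout_budget(2400, 1500, 400)   # -> $150 a month for takeout
per_meal = budget / 4                      # assume roughly four orders a month
ranked = sorted(restaurants,
                key=lambda r: score_restaurant(r, per_meal, past_cuisines={"pizza"}),
                reverse=True)
print(ranked[0]["name"])                   # the surprise pick
```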
## Inspiration Reducing North American food waste. ## What it does Food for All offers a platform that lets food pantries and restaurants connect. With a very intuitive interface, pantries and restaurants are able to register their organizations to request or offer food. Restaurants can estimate their leftover food, and instead of it going to waste, they are able to match with food pantries to make sure the food goes to a good cause. Depending on the quantity of food requested and available to offer, as well as location, the restaurants are given a list of the pantries that best match their availability. ## How we built it Food for All is built using a full Node.js stack. We used Express, BadCube, React, Shard and Axios to make the application possible. ## Challenges we ran into The main challenges of developing Food for All were learning new frameworks and languages. Antonio and Vishnu had very little experience with JavaScript and nonrelational databases, as well as Express. ## Accomplishments that we're proud of We are very proud of the implementation of the Google Maps API on the frontend and our ranking and matching algorithm for top shelters. ## What we learned We learned how to make REST APIs with Express. We also realized a decent way through our project that our nonrelational local database, BadCube, worked best when the project was beginning, but as the project scaled it had no ability to deal with nuanced objects or complex nested relationships, making it difficult to write and read data. ## What's next for Food for All In the future, we aim to work out the legal aspects to ensure the food is safely prepared and delivered, to reduce the liability of the restaurants and shelters. We would also like to tweak certain aspects of the need-determination algorithm used to find the shelters at greatest need for food. Part of this involves more advanced statistical methods and a gradual transition from algorithmic to machine-learning-oriented methods.
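The ranking and matching algorithm isn't spelled out above, so purely as an illustration, a simple scorer could weigh how much of a restaurant's surplus a pantry can absorb against how far away it is; the weights and the 25 km cutoff below are assumptions, not the algorithm Food for All actually uses.

```python
from math import asin, cos, radians, sin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def rank_pantries(restaurant, pantries, w_need=0.7, w_dist=0.3, max_km=25):
    """Score each pantry by how much of the offered food it can absorb and how close it is."""
    scored = []
    for p in pantries:
        d = km_between(restaurant["lat"], restaurant["lon"], p["lat"], p["lon"])
        if d > max_km:
            continue
        need_fit = min(p["meals_requested"], restaurant["meals_offered"]) / restaurant["meals_offered"]
        scored.append((w_need * need_fit + w_dist * (1 - d / max_km), p["name"]))
    return sorted(scored, reverse=True)

# rank_pantries({"lat": 42.36, "lon": -71.06, "meals_offered": 40}, pantry_list)
```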
partial
## Inspiration When I was 10, I played a text-adventure game about simply traversing the world. It was small and there was no GUI, but even then, outside of Minecraft, it was one of the few games that captured my interest because of its wide scope and untapped potential for exploration. Recently I wanted to revisit that game, but I found that in the meantime a lot had changed. As such, we decided to re-explore that field ourselves using generative AI to capture the non-linear storytelling and exploration, and to enjoy the books we read nowadays in a new, refreshing light. ## What it does Multivac stands for Multimedia Visual Adventure Console, because it uses both text and images to turn any piece of text into an interactive adventure. Basically, you can upload any book and turn it into an interactive fiction game. Multivac processes and chunks the uploaded story, storing it in a vector database. From this, it creates a list of major states with chronological timestamps from the story that are pivotal points in the book which -- unless monumental work is done on your part -- will occur again. This allows Multivac to know what the main plot points of the story are, so that it has something to work off of. In addition, the list of states helps to tie you back to the major plot points of your favorite books, allowing you to relive those memories from a new and fresh perspective. From this, Multivac uses relevant info from the story vector database, the chat history vector database, the state list and the current timestamp to generate responses to the user that move the story along in a cohesive manner. Alongside these responses, Multivac generates images with Stable Diffusion that enhance the story being told -- allowing you to truly relive and feel immersed in the story **you're** writing. ## How we built it For the frontend, we used React and TypeScript. For the backend, we used Flask for server management. We used LangChain for querying Anthropic's Claude, and we used LlamaIndex to store story data and chat history in a vector database and to do vector searches through LlamaIndex's query engine. We also used Replicate AI's API to generate images with Stable Diffusion. For persistent database systems, we decided to create our own individual SQL-like system so we could avoid the additional overhead that came with SQL and its cousins. ## Challenges we ran into There were 2 major challenges we ran into. The first one was related to building Multivac's response pipeline. It was our first time using vector databases and LlamaIndex as a whole to search for relevant details within large bodies of text. As such, as you might imagine, there were quite a few bugs and unforeseen break points. The second was related to the actual database system we were using. As one would imagine, there's a reason SQL is so popular. It wasn't until 3:00 A.M., while creating and debugging our own operations for getting, writing, etc. in this database system, that we truly understood SQL's beauty. ## Accomplishments that we're proud of We are proud of creating a persistent database from scratch, as well as being able to build all the features we set out to create. We are also proud of completing the end-to-end pipeline using LlamaIndex and LangChain (which was surprisingly difficult). We are especially proud of our UI as -- if we do say so ourselves -- it looks pretty slick! ## What we learned A couple of things.
Firstly, if you're going to use a database system or your application requires persistent storage, use SQL or an established database. Don't make your own. Just because you can doesn't mean you should. Secondly, we learned how to use LlamaIndex to process, chunk and use text end-to-end in a vector DB for LLM calls. Thirdly, we learned how tiring 1-day, 2-night hackathons can be and how taxing they are on the human spirit. Lastly, we learned how cool Harvard and MIT look. ## What's next for Multivac As of writing, we currently use a list of states to track where the user is in the story, but we want to expand Multivac to instead be a self-generative state machine with stochastic transitions. This would make the world feel more **alive** and grounded in reality (since the world doesn't always go your way), giving you further immersion to explore new paths in your favorite stories. This would also create more control over the story, allowing for more initial customization on the user's end regarding what journey they want to take.
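As a rough sketch of the LlamaIndex side of the pipeline described above -- assuming a recent llama-index release where the core API lives under `llama_index.core`, and with embedding credentials (OpenAI by default) configured in the environment -- retrieval over the uploaded book might look like this:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

book_docs = SimpleDirectoryReader("uploaded_book/").load_data()   # chunked into nodes automatically
story_index = VectorStoreIndex.from_documents(book_docs)
retriever = story_index.as_retriever(similarity_top_k=4)          # chat history can live in a second index

def story_context(player_action: str) -> str:
    """Fetch the passages most relevant to what the player just did,
    to be stitched into the prompt alongside the state list and current timestamp."""
    return "\n\n".join(n.node.get_content() for n in retriever.retrieve(player_action))
```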
## Inspiration As a video game lover and someone that's been working with Gen AI and LLMs for a while, I really wanted to see what combining both in complex and creative ways could lead to. I truly believe that not too far in the future we'll be able to explore worlds in RPGs where the non-playable characters feel immersively alive and part of their world. Also I was sleep-deprived and wanted to hack something silly :3 ## What it does I leveraged generative AI (Large Language Models), as well as Vector Stores and Prompt Chaining, to 'train' an NPC without having to touch the model itself. Everything is done in context and through external memory using the Vector Store. Furthermore, a separate model concurrently analyzes the conversation as it goes to calculate conversation metrics (familiarity, aggressiveness, trust, ...) to trigger events and new prompts dynamically! Sadly there is no public demo for it, because I didn't want to force anyone to create their own API key to use my product, and the results just wouldn't be the same on small, hostable free-tier LLMs. ## How we built it For the frontend, I wanted to challenge myself and not use any framework or library, so this was all done through good old HTML and vanilla JS with some Tailwind here and there. For the backend, I used the Python FastAPI framework to leverage async workflows and websockets for token streaming to the frontend. I use OpenAI models combined together using LangChain to create complex pipelines of prompts that work together to keep the conversation going and update its course dynamically depending on user input. Vector Stores serve as external memory for the LLM, which can query them through similarity search (or other algorithms) in real time to supplement its in-context conversation memory through two knowledge sources: 'global' knowledge, which can be made up of thousands of words or small text documents, sources that can be shared by NPCs inhabiting the same 'world'. These are things the NPC should know about the world around them, its history, its geography, etc. The other source is 'local' knowledge, which is mostly unique to the NPC: personal history, friends, daily life, hobbies, occupations, etc. The combination of both, accessible in real time and easily enhanceable through other LLMs (more on this in 'What's next'), leads us to a chatbot that's been essentially gaslit into a whole new virtual life! Furthermore, heuristically determined conversation 'metrics' are dynamically analyzed by a separate LLM on the side, to trigger pre-determined events based on their evolution. Each NPC can have pre-set values for these metrics, along with their own metric-triggered events, which can lead to complex storylines and give way to cool applications (quest giving, ...) ## Challenges we ran into I wanted to do this project solo, so I ran out of time on a few features. The token streaming for the frontend was somehow impossible to make work correctly. It was my first time coding a 'raw' API like this, so that was also quite a challenge, but it got easier once I got the hang of it. I could say a similar thing for the frontend, but I had so much fun coding it that I wouldn't even count it as a challenge! Working with LLMs is always quite a challenge, as trying to get correctly formatted outputs can feel like asking a toddler to follow instructions. ## Accomplishments that we're proud of I'm proud of the idea and the general concept and design, as well as all the features and complexities I noted down that I couldn't implement!
I'm also proud to have dedicated so much effort to such a useless, purely-for-fun, scatter-brained, 3-hours-of-sleep project in a way that I really haven't done before. I guess that's the point of hackathons! Despite a few things not working, I'm proud to have architected quite a complex program in very little time, by myself, starting from nothing but sleep-deprivation-fueled notes jotted down on my phone. ## What we learned I learned a surprising amount of HTML, CSS and JS from this, elements of programming I always pushed away because I am a spoiled brat. I got to implement technologies I hadn't tried before as well, like WebSockets and Vector Stores. As with every project, I learned about feature creep and properly organising my ideas in a way that something, anything, can get done. I also learned that there is such a thing as too much caffeine, which I duly noted and will certainly regret tonight. ## What's next for GENPC There are a lot of features I wanted to work on but didn't have time for, and also a lot of potential for future additions. One I mentioned earlier is to automatically extend global or local knowledge through a separate LLM: given keywords or short phrases, a ton of text can be added to complement the existing data and further fine-tune the NPC. There's also an 'improvement mode' I wanted to add, where you can directly write data into static memory through the chat mode. I also didn't have time to completely finish the vector store or conversation metric graph implementations, although at the time I'm writing this devpost I still have 2 more hours to grind >:) There's a ton of stuff that can arise from this project in the future: this could become a scalable web app, where NPCs can be saved and serialized to be used elsewhere. Conversations could be linked to voice-generation and facial-animation AIs to further boost the immersiveness. A ton of heuristic optimizations can be added around the metric and trigger systems, like triggers influencing different metrics. The prompt chaining itself could become much more complex, with added layers of validation and analysis. The NPCs could be linked to other agentic models and perform complex actions in simulated worlds!
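To make the metric-triggered event idea concrete, here is a small sketch of how per-NPC metrics could be updated each turn and checked against thresholds; the metric names, deltas, thresholds, and events are assumptions, and the side-LLM analysis is stubbed out rather than wired to a real model.

```python
METRICS = {"familiarity": 0.1, "aggressiveness": 0.0, "trust": 0.2}  # per-NPC starting values (assumed)

TRIGGERS = [  # illustrative pre-set events
    {"metric": "trust", "above": 0.8, "event": "npc_offers_quest", "fired": False},
    {"metric": "aggressiveness", "above": 0.7, "event": "npc_ends_conversation", "fired": False},
]

def analyse_turn(user_msg: str, npc_reply: str) -> dict:
    """Stand-in for the side LLM that scores the latest exchange; returns metric deltas."""
    return {"familiarity": 0.05, "aggressiveness": -0.02, "trust": 0.03}

def update_metrics_and_fire(user_msg: str, npc_reply: str) -> list:
    """Apply the analyst's deltas, clamp to [0, 1], and fire any thresholds crossed for the first time."""
    fired = []
    for name, delta in analyse_turn(user_msg, npc_reply).items():
        METRICS[name] = min(max(METRICS[name] + delta, 0.0), 1.0)
    for t in TRIGGERS:
        if not t["fired"] and METRICS[t["metric"]] >= t["above"]:
            t["fired"] = True
            fired.append(t["event"])   # e.g. inject a new system prompt for the NPC
    return fired
```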
## Inspiration Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python being more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that anyone can use to produce any tile-based NumPy game. Our first offering is *StickyJump*, a 2D platformer whose layout can be changed on the fly by the placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on. ## What it does StickyAR works by using OpenCV's contour recognition to detect the borders of the projector image and the positions of human-placed sticky notes. We then use a matrix transformation scheme to ensure that the positioning of the sticky notes aligns with the projector image, so that our character can appear as if he is standing on top of the sticky notes. We then have code for a simple platformer that uses the sticky notes as the platforms our character runs, jumps, and interacts with! ## How we built it We split our team of four into two sections, one half working on the OpenCV/data transfer part of the project and the other half working on the game side. It was truly a team effort. ## Challenges we ran into The biggest challenge we ran into was that a lot of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project. Luckily we had some very gracious mentors come out and help us get things sorted out! We also first attempted the game half of the project in Unity, which ended up being too much of a beast to handle. ## Accomplishments that we're proud of That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features. ## What we learned A whole ton about Python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came very much in handy, and its networking abilities were key for us when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end product. ## What's next for StickyAR StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color.
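As an illustration of the contour-detection and matrix-transformation steps described above, the sketch below finds note-coloured blobs in a camera frame and maps their positions into game coordinates; the HSV colour range, the minimum area, and the assumption of OpenCV 4's two-value `findContours` return are ours, not necessarily StickyAR's.

```python
import cv2
import numpy as np

def find_sticky_notes(frame_bgr, hsv_lo=(20, 80, 80), hsv_hi=(35, 255, 255), min_area=500):
    """Return bounding boxes of yellow-ish sticky notes detected in a camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def camera_to_game_transform(projector_corners_in_camera, game_w, game_h):
    """Homography from the projected image's corners (as seen by the camera) to game coordinates."""
    src = np.float32(projector_corners_in_camera)  # 4 corners, ordered TL, TR, BR, BL
    dst = np.float32([[0, 0], [game_w, 0], [game_w, game_h], [0, game_h]])
    return cv2.getPerspectiveTransform(src, dst)

def to_game_coords(matrix, x, y):
    """Map a single camera-space point (e.g. a note's centre) into game space."""
    pt = cv2.perspectiveTransform(np.float32([[[x, y]]]), matrix)
    return float(pt[0, 0, 0]), float(pt[0, 0, 1])
```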
losing
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm. ## About Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page so that all the tools are accessible on one screen and transitioning between them is easier. We identify this page as a study room where users can collaborate and which they can join with a simple URL. Everything is synced between users in real time. ## Features Our platform allows multiple users to enter one room and access tools for watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit. Adding more relevant tools and widgets, and expanding to other fields of work, would increase our user demographic. Interface customization options would allow users to personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
## Inspiration We know the struggles of students. Trying to get to that one class across campus in time. Deciding what to make for dinner. But there was one that stuck out to all of us: finding a study spot on campus. There have been countless times when we wandered around Mills or Thode looking for a free space to study, wasting our precious study time before an exam. So, taking inspiration from parking lots, we designed a website that presents a live map of the free study areas of Thode Library. ## What it does A network of small mountable microcontrollers uses ultrasonic sensors to check if a desk/study spot is occupied. In addition, it uses machine learning to determine peak hours and suggested availability from the aggregated data it collects from the sensors. A webpage presents a live map, as well as peak hours and suggested availability. ## How we built it We used a Raspberry Pi 3B+ to receive distance data from an ultrasonic sensor and used a Python script to push the data to our database running MongoDB. The data is then pushed to our webpage running Node.js and Express.js as the backend, where it is updated in real time on a map. Using the data stored in our database, a machine learning model was trained to determine peak hours and the best time to go to the library. ## Challenges we ran into We had a **life-changing** experience learning back-end development, delving into new frameworks such as Node.js and Express.js. Although we were comfortable with front-end design, linking the front end and the back end together to ensure the web app functioned as intended was challenging. For most of the team, this was the first time dabbling in ML. While we were able to find a Python library to assist us with training the model, connecting the model to our web app with Flask was a surprising challenge. In the end, we persevered through these challenges to arrive at our final hack. ## Accomplishments that we are proud of We think that our greatest accomplishment is the sheer amount of learning and knowledge we gained from doing this hack! Our hack seems simple in theory, but putting it together was one of the toughest experiences at any hackathon we've attended. Pulling through and not giving up until the end was also noteworthy. Most importantly, we are all proud of our hack and cannot wait to show it off! ## What we learned Through rigorous debugging and non-stop testing, we gained more experience with JavaScript and its various frameworks such as Node.js and Express.js. We also got hands-on experience with programming concepts and tools such as MongoDB, machine learning, HTML, and scripting, and learned the applications of these tools. ## What's next for desk.lib If we had more time to work on this hack, we would have been able to increase cost-effectiveness by branching four sensors off one chip. Also, we would implement more features to make an impact in other areas, such as the ability to create social group beacons where others can join in for study, activities, or general socialization. We were also debating whether to integrate a solar panel so that the installation process could be easier.
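To sketch the sensing loop described above -- an ultrasonic sensor on a Raspberry Pi pushing occupancy readings into MongoDB -- something like the following could work; the GPIO pins, the 60 cm occupancy threshold, the desk ID, and the connection string are all assumptions rather than the project's actual values.

```python
import time

import RPi.GPIO as GPIO
from pymongo import MongoClient

TRIG, ECHO = 23, 24            # assumed BCM pin wiring for an HC-SR04-style sensor
OCCUPIED_CM = 60               # assumed threshold: anything closer means the desk is taken

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Fire a 10 microsecond trigger pulse and time the echo; sound travels ~34300 cm/s."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2   # halve for the round trip

readings = MongoClient("mongodb://localhost:27017")["desklib"]["readings"]  # assumed URI and names
while True:
    distance = read_distance_cm()
    readings.insert_one({"desk_id": "thode-3F-12", "occupied": distance < OCCUPIED_CM,
                         "distance_cm": distance, "ts": time.time()})
    time.sleep(5)
```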
## Inspiration Given that students are struggling to make friends online, we came up with an idea to make this easier. Our web application combines the experience of a video conferencing app with a social media platform. ## What it does Our web app primarily targets students attending college lectures. We wanted to have an application that would allow users to enter their interests/hobbies/classes they are taking. Based on this information, other students would be able to search for people with similar interests and potentially reach out to them. ## How we built it One of the main tools we were using was WebRTC, which facilitates video and audio transmission between computers. We also used Google Cloud and Firebase for hosting the application and implementing user authentication. We used HTML/CSS/JavaScript for building the front end. ## Challenges we ran into Both of us were super new to Google Cloud + Firebase and backend development in general. The setup of both platforms took a significant amount of time. Also, we had some trouble with version control on GitHub. ## Accomplishments we are proud of GETTING STARTED WITH BACKEND is a huge accomplishment! ## What we learned Google Cloud, Firebase, WebRTC - we got introduced to all of these tools during the hackathon. ## What’s next for Studypad We will definitely continue working on this project and implement other features we were thinking about!
winning
## What it does From Here to There (FHTT) is an app that shows users their options for travel and allows them to make informed decisions about their form of transportation. The app shows statistics about different methods of transportation, including calories burned, CO2 emitted and estimated gas prices. See a route you like? Tap to open the route details in Google Maps. ## How we built it Everything is written in Java using Android Studio. We are using the Google Maps Directions API to get most of our data. Other integrations include JSON.simple and Firebase Analytics. ## Challenges we ran into We wanted to find ways to more positively influence users and have the app be more useful. The time pressure was both motivating and challenging. ## Accomplishments that we're proud of Interaction with the Google Maps Directions API. The card-based UI. ## What we learned From this project, we have gained more experience working with JSON and AsyncTasks. We know more about the merits and limitations of various Google APIs and have a bit more practice implementing Material Design guidelines. We also used Git with Android Studio for the first time. ## What's next for From Here to There Integrating an API for fuel cost updates. Improving the accuracy of calorie/gas/CO2 estimates by setting up Firebase Authentication and Realtime Database to collect user data such as height, weight and car type. Using a user system to show "lifetime" impact of trips. A compiled APK has been uploaded to the GitHub repo; try it out and let us know what you think!
## Inspiration Many of us tech enthusiasts have always been interested in owning an electric vehicle; however, in India you can go hundreds of miles without seeing a single charging point once you leave the big cities. In my case, my dad was not ready to buy an electric car as it's not even possible to travel from my home in Bangalore to Manipal (the location of my university), since there aren't enough charging points on the way to make it. This gave me the idea for this project, so that everyone in the future will be able to own an electric vehicle. ## What it does Our application Elyryde allows owners of electric vehicles to find charging points on the way to their destination by using the map in the application. It also allows owners of electric vehicle charging points to list their charging point to generate revenue when it is not being used. This will also enable people to take long-distance road journeys, as they would no longer have to worry about running out of charge on the way to their destination. This app provides a push towards green and sustainable energy by making electric cars more accessible to people around the world. ## How we built it Our application is built with Java for Android. We designed the application with authentication by email and password through Firebase. We also set up a Firebase Realtime Database to store longitude and latitude in order to plot them on the map. We also made use of the Google Maps API in order to plot the locations on the map and allow users to navigate to their nearest charging station. ## Challenges we ran into The main challenge we faced was with the Firebase integration, since we were quite new to Firebase. We also fell short of time to build the profile page where the user could list their charging points. ## Accomplishments that we're proud of We were able to contribute towards making a push for a green, sustainable environment where anyone can own an electric vehicle without any problems. We hope that people in developing countries will soon be able to own electric vehicles without any problems. ## What we learned Over the course of the hackathon we learnt about Firebase integration with Java. We also learnt about how much of a difference in carbon emissions can be brought about by switching away from fossil fuels and other polluting substances. Electric vehicles will help bring down pollution in many parts of the world. ## What's next for Elyryde The app offers a solution to the growing disparity in developing countries when it comes to electric vehicles. Machine learning and statistical analysis on the collected data have the potential to aid corporate investors in setting up charging stations in strategic locations. This can attract high-end investment as well as increase public utility. Something our app also lacks is regulation; creating a self-sustaining community of electric vehicle users with some guidelines will further aid in democratizing the app. Finally, the app works towards sustainable development, and hopefully more green utility can be added to it.
## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they get a different picture when 50+ individuals want them to FixIt now! (A small sketch of this vote-based ranking follows below.)
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we want to target. Next, we discussed the main features we needed to ensure full functionality for those populations. We collectively decided to use Android Studio to build an Android app and the Google Maps API for an interactive map display.
## Challenges we ran into
Our team had little to no exposure to the Android SDK before, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, along with figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only gained app-design and technical experience with the Google Maps API, but we also explored our roles as engineers who are also citizens. Empathizing with our user group showed us a clear way to lay out the key features we wanted to build and helped us create an efficient design and clear display.
## What we learned
As mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue's Perspective
* Progress bar, fancier rating system
* Crowdfunding

A Finder's Perspective
* Filter Issues, badges/incentive system

A Fixer's Perspective
* Filter Issues by score, trending Issues
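To make the "50+ individuals want them to FixIt now" idea concrete, here is a tiny Python sketch of vote-based ranking over reported Issues. The Issue fields and the numbers are hypothetical, purely to illustrate how crowdsourced votes could be surfaced to city officials; this is not FixIt's actual data model.

```python
from dataclasses import dataclass


@dataclass
class Issue:
    # Hypothetical fields for illustration; not FixIt's actual schema.
    title: str
    lat: float
    lng: float
    votes: int = 0


def top_issues(issues: list[Issue], limit: int = 10) -> list[Issue]:
    """Surface the Issues the most residents want Fixed, e.g. for a city dashboard."""
    return sorted(issues, key=lambda issue: issue.votes, reverse=True)[:limit]


reports = [
    Issue("Broken faucet in park restroom", 37.4275, -122.1697, votes=52),
    Issue("Pothole on Main St", 37.4300, -122.1650, votes=8),
]
print([issue.title for issue in top_issues(reports)])
# ['Broken faucet in park restroom', 'Pothole on Main St']
```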
losing
## Inspiration
Determined to create a project that could make impactful change, we sat down together as a group and discussed our lived experiences, thoughts, and opinions. We quickly realized how much the lack of thorough sexual education in our adolescence impacted each of us as we made the transition to university. Furthermore, we began to see that this kind of information isn't readily available to female-identifying individuals (and others who would benefit from it) in an accessible and digestible manner. We chose to name our idea 'Illuminate' because we are bringing light to a very important topic that has been in the dark for so long.
## What it does
This application is a safe space for women (and others who would benefit from this information) to learn more about themselves and their health regarding their sexuality and relationships. It covers everything from menstruation to contraceptives to consent. The app also includes a space for women to ask questions, a way to find which products are best for them and their lifestyles, and a way to locate their local sexual health clinics. Not only does this application shed light on a taboo subject, it empowers individuals to make smart decisions regarding their bodies.
## How we built it
Illuminate was built using Flutter as our mobile framework in order to support both iOS and Android. We learned the fundamentals of the Dart language to take full advantage of Flutter's fast development and created a functioning prototype of our application.
## Challenges we ran into
As individuals who had never used either Flutter or Android Studio, the learning curve was quite steep. We were unable to create much of anything for a long time as we struggled with the basics. However, with lots of time, research, and learning, we built up our skills and were able to carry out the rest of our project.
## Accomplishments that we're proud of
In all honesty, we are so proud of ourselves for being able to learn as much as we did about Flutter in the time that we had. We really came together as a team and created something we are all genuinely proud of. This will definitely be the first of many stepping stones in what Illuminate will do!
## What we learned
Despite this being our first time, by the end of it all we had learned how to successfully use Android Studio and Flutter and how to create a mobile application!
## What's next for Illuminate
In the future, we hope to add an interactive map component that uses GPS to show users where their local sexual health clinics are.
## Inspiration
After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants.
## What it does
Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players' plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants.
## How we built it
### Back-end:
The back end is largely Python. We took on a new challenge and decided to try Socket.IO for a websocket connection so that we could support multiplayer; this tripped us up for hours and hours until we finally got it working. Aside from this, an Arduino reads the moistness of the soil and the brightness of the surroundings, and a camera captures a picture of the plant, which we run through computer vision to recognize what the plant is. Finally, using LangChain we developed an agent that relays all of the Arduino info to the front end and manages state, and for storage we used MongoDB to hold all of the data. (A small sketch of the websocket flow follows below.)
### Front-end:
The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players.
## Challenges we ran into
We had a lot of difficulty setting up Socket.IO and connecting the API through it to the front end and the database.
## Accomplishments that we're proud of
We are incredibly proud of integrating our websockets between the front end and back end and using the Arduino data from the sensors.
## What's next for Poképlants
* Since the game was designed with a multiplayer experience in mind, we want to add more social capabilities by creating a friends list and leaderboard
* Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help
* Some users might prefer the feel of a mobile app, so a next step would be to create a mobile version of our project
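Since the back end pairs Python with Socket.IO for multiplayer, the sketch below shows one way a Flask-SocketIO server could rebroadcast fresh sensor readings and battle moves to every connected player. The event names and payload fields are assumptions made for illustration, not the project's actual protocol.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")  # let the React dev server connect


@socketio.on("sensor_update")       # hypothetical event carrying Arduino readings
def handle_sensor_update(data):
    # e.g. {"plant_id": "fern-1", "moisture": 412, "light": 0.73}
    emit("plant_state", data, broadcast=True)  # every player sees the plant's latest stats


@socketio.on("battle_move")         # hypothetical event for a game action
def handle_battle_move(move):
    emit("battle_update", move, broadcast=True, include_self=False)


if __name__ == "__main__":
    socketio.run(app, port=5000)
```

A React front end using the socket.io-client package would then subscribe to `plant_state` and `battle_update` to drive the game UI.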
## Inspiration
Each of us living in a relatively suburban area, we often find ourselves quite confused when walking through larger cities. We can each relate to the frustration of not being able to find what seems to be even the simplest of things: a restroom nearby, or a parking space we have been driving around endlessly to find. Unfortunately, we can also relate to the fear of danger present in many of these same cities. IntelliCity was designed to accommodate each of these situations by providing users with a flexible, real-time app that reacts to the city around them.
## What it does
IntelliCity works by leveraging the power of crowdsourcing. Whenever users spot an object, event, or place that fits into one of several categories, they can report it through a single button in our app. The report is then relayed through our servers, and other users of the app can view it, along with any associated images or descriptions, conveniently placed as a marker on a map.
## How we built it
![technologies](https://i.imgur.com/T612d0C.png "technologies we used")
IntelliCity was built using a variety of different frameworks and tools. Our front end was designed using Flutter and the Google Maps API, which gave us an efficient way to get geolocation data and place markers. Our back end was built with Flask and Google Cloud. (A small sketch of the report-and-alert flow follows below.)
## Challenges we ran into
Although we are quite happy with our final result, there were definitely a few hurdles along the way. One of the most significant was properly optimizing our app for mobile devices with Flutter, a relatively new framework for many of us. A related challenge was placing custom, location-dependent markers for individual reports. Another was transmitting real-time data through our setup and having it finally appear for individual user accounts. Finally, there was the challenge of actually sending text messages to users when potential risks were identified in their area.
## Accomplishments that we're proud of
We are proud of getting a functional app working for both mobile and web.
## What we learned
We learned a significant amount throughout this hackathon, about everything from specific frameworks and APIs such as Flutter, Google Maps, Flask, and Twilio to communication and problem-solving skills.
## What's next for IntelliCity
In the future, we would like to add support for detailed analysis of specific cities.
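Because the back end is Flask and the safety texts go out through Twilio, here is a minimal Python sketch of how a report endpoint might trigger SMS alerts. The endpoint name, payload fields, and phone numbers are hypothetical, and the Twilio credentials are placeholders; this is not IntelliCity's actual server code.

```python
from flask import Flask, request, jsonify
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder credentials


@app.route("/report", methods=["POST"])   # hypothetical endpoint name
def submit_report():
    report = request.get_json()
    # Illustrative fields only; not IntelliCity's real schema.
    if report.get("category") == "safety":
        for phone in report.get("nearby_subscribers", []):
            twilio.messages.create(
                body=f"IntelliCity alert near you: {report.get('description', 'safety report')}",
                from_="+15005550006",      # placeholder sender number
                to=phone,
            )
    return jsonify({"status": "received"}), 201


if __name__ == "__main__":
    app.run(port=8080)
```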
winning