
Hackathon - Lift Bot#

Created: October 24, 2021 8:29 PM
Tags: Raspberry Pi

Pain Point Description#

After the company moved, the office space and surrounding environment improved greatly; the one thing that got worse is the building's elevator. The elevator scheduling algorithm is terrible, and every ride means a 5-10 minute wait, so each person spends roughly 20-30 minutes a day waiting for the elevator. With about 500 employees, that is 30 min × 500 / 60 = 250 hours of waiting per day, or about 5,500 hours over a month of ~22 working days, which is equivalent to wasting roughly 550 working days every month. It also leaves a bad impression on visitors. I often think how great it would be if we could save this time.

How to Solve#

Station a person at the elevator entrance. When someone wants to take the elevator, that person presses the button for them and notifies them when the elevator arrives.

This way we could still save about 5,280 hours per month (the 5,500 hours above, minus the time the dedicated button presser puts in). However, isn't this a bit too silly?

So, we use a computer to do these things for us.

Send a request from a mobile phone to the Raspberry Pi, which triggers the servo to press the elevator button; the Vision Kit then starts up and uses deep-learning computer vision to read the floor information on the elevator's LED display, and when the elevator is about to arrive it sends a notification to go catch it.
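
A minimal sketch of what triggering the system could look like from a phone or laptop. The addresses and the /press and /watch routes are assumptions for illustration; the actual endpoints are not documented in this post.

```python
import requests

SERVO_PI = "http://192.168.1.10:5000"   # Pi wired to the servo (address assumed)
VISION_PI = "http://192.168.1.11:5000"  # Pi attached to the Vision Kit (address assumed)

# 1. Ask the servo Pi to press the physical elevator button.
requests.post(f"{SERVO_PI}/press", json={"button": "down"}, timeout=5)

# 2. Ask the vision Pi to start watching the LED display and send a Feishu
#    notification when the elevator is getting close.
requests.post(f"{VISION_PI}/watch", json={"target_floor": 12, "user": "simon"}, timeout=5)
```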

Hardware Equipment#

Host#

Raspberry Pi Zero W * 2


Vision Recognition#

Google Vision Kit * 1

The kit includes a deep-learning board that runs TensorFlow models, plus a camera, button, LED, and buzzer.


Mechanical Drive#

WS-SG900 servo * 1


Servo drive board * 1


Network Part#

Each of the two Raspberry Pis runs a Flask web service for receiving and sending commands.

Raspberry Pi with Servo#

It exposes an endpoint that receives button-press commands; the Pi is wired to the servo driver board, so when a command arrives it drives the servo to press the elevator button.
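
A minimal sketch of what this endpoint might look like, assuming the servo signal line is driven with RPi.GPIO software PWM on GPIO 18; the actual pin, route name, and duty cycles used at the hackathon are not documented, so treat these values as placeholders.

```python
import time

import RPi.GPIO as GPIO
from flask import Flask, jsonify

SERVO_PIN = 18  # assumed GPIO pin for the servo signal line

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz servo signal
pwm.start(0)

app = Flask(__name__)

@app.route("/press", methods=["POST"])
def press_button():
    """Swing the servo arm onto the elevator button, then retract it."""
    pwm.ChangeDutyCycle(7.5)   # rotate the arm toward the button (placeholder angle)
    time.sleep(0.5)
    pwm.ChangeDutyCycle(2.5)   # retract the arm
    time.sleep(0.5)
    pwm.ChangeDutyCycle(0)     # stop pulses so the servo does not jitter
    return jsonify({"status": "pressed"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```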

Raspberry Pi with Computer Vision#

It exposes an endpoint that receives recognition commands, uses Google's AIY Vision board to read the elevator's floor display, and when it sees the elevator heading toward the 12th floor (up or down) and within 3 floors of it, it calls the Feishu API to send a message to the person waiting.
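
A rough sketch of that notification loop. The classify_frame() helper is hypothetical (it stands in for the actual Vision Kit inference), and the Feishu custom-bot webhook URL is a placeholder; the post does not show the real integration code.

```python
import time

import requests

FEISHU_WEBHOOK = "https://open.feishu.cn/open-apis/bot/v2/hook/<token>"  # placeholder

TARGET_FLOOR = 12   # the office floor
NOTIFY_WITHIN = 3   # notify when the elevator is within 3 floors of the target


def classify_frame():
    """Placeholder for the Vision Kit inference.

    The real version runs the MobileNet classifier on the Vision Kit and returns
    the (floor, direction) it reads from the LED display, e.g. (9, "up").
    """
    raise NotImplementedError


def notify(text):
    """Send a text message through a Feishu custom-bot webhook."""
    requests.post(FEISHU_WEBHOOK,
                  json={"msg_type": "text", "content": {"text": text}},
                  timeout=5)


def watch():
    while True:
        floor, direction = classify_frame()
        coming_up = direction == "up" and 0 < TARGET_FLOOR - floor <= NOTIFY_WITHIN
        coming_down = direction == "down" and 0 < floor - TARGET_FLOOR <= NOTIFY_WITHIN
        if coming_up or coming_down:
            notify(f"Elevator at floor {floor}, going {direction} - time to head out!")
            break
        time.sleep(1)
```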

Model Part#

We used Google's TensorFlow MobileNet model for supervised classification, recognizing both the floor number and whether the elevator is going up or down.
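
A minimal transfer-learning sketch of how such a classifier could be trained with Keras, assuming the labeled images sit in one folder per label (e.g. data/up_1, data/down_1, ...). This is an illustration under those assumptions, not the exact training script used at the hackathon.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 30  # per the post: one "up" and one "down" label per floor

# Assumed layout: data/<label>/<image>.jpg, one folder per (floor, direction) label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=16)

# MobileNet backbone pre-trained on ImageNet, with its classification head removed.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # only train the new head on our ~200 images

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```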

Data Collection#

Collected over 200 images of the display covering different floors and both directions, and removed the hidden files macOS creates (e.g. .DS_Store) so they would not pollute the dataset.
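
Cleaning out those hidden files can be as simple as the snippet below, assuming the images live under data/ (the actual folder name is not given in the post).

```python
import pathlib

# Delete macOS metadata files (.DS_Store, ._* resource forks, etc.) so they are
# not mistaken for images when the dataset directory is scanned.
for path in pathlib.Path("data").rglob(".*"):
    if path.is_file():
        path.unlink()
```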

Labeling#

The images were divided into 30 labels: one "going up" and one "going down" label per floor (e.g. "up to 1", "down to 1"), so each floor has these two categories of images.

Defects#

Due to time constraints, the model parameters were not well tuned, and the recognition accuracy is below 80%.

Results#

Won first place in the company's 2021 Hackathon and received a prize of 10,000 yuan.

Todo#

Post a video of the build process and the final result.

Execution Table#
