Lobe: a visualization tool that builds, trains, and deploys personalized deep learning models

Have you ever imagined that an app could recognize your face and turn it into a matching emoji? Or that pointing your camera at a plant could instantly show you the plant's name? Or that, even while you are away from home, it could watch your baby and notify you the moment the baby wakes up? These smart behaviors are now possible with a tool called Lobe. This visualization tool can create, train, and deploy personalized deep learning models, and developers don't have to write any code.

Lobe was built by a startup in San Francisco, USA. Its creators believe that building a deep learning model is a slow and complicated process, and that the hardest part is finding a starting point: there are many languages to learn, and even once you are ready, it is difficult to picture and understand what you really want to build. That is why they created Lobe. It lets people from different backgrounds and professions invent and create with deep learning.

Just drag in a folder and it learns on its own

Lobe is an easy-to-use visualization tool. You can create a custom deep learning model, quickly train it, and deploy it directly to an app, all without writing any code.

The whole process is divided into three steps. The first step is building the model. Drag a folder of training samples from the desktop, and Lobe automatically creates a deep learning model and starts training. You can then adjust the settings and connect pretrained sections to fine-tune the model.
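This workflow corresponds roughly to transfer learning. As a rough sketch of the idea (not Lobe's actual code), here is how connecting a pretrained section to a new classifier head looks in Keras; the folder layout, image size, and choice of MobileNetV2 are illustrative assumptions:

```python
# A rough sketch of the "connect pretrained sections and fine-tune" idea,
# expressed in Keras. The folder layout (one subfolder per label), image
# size, and choice of MobileNetV2 are illustrative assumptions, not
# Lobe's actual internals.
import tensorflow as tf

# Each subfolder of training_samples/ names one label, e.g. "cat/", "dog/".
train_ds = tf.keras.utils.image_dataset_from_directory(
    "training_samples", image_size=(224, 224), batch_size=32)

# Start from a pretrained section and keep its weights frozen at first.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Attach a small trainable head that predicts the folder labels.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```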

The second step is training. As the model improves, you can monitor training progress in real time through interactive charts and test results. Training in the cloud produces results quickly without slowing down your computer.

The third step is deployment. After training is complete, you can export the trained model to CoreML or TensorFlow and embed it in an iOS or Android app, or use the convenient Lobe developer API to run the model remotely from the cloud.
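For the TensorFlow route, running an exported model in your own code takes only a few lines. A minimal sketch, assuming the export is a standard TensorFlow/Keras model; the path, input size, and preprocessing here are assumptions, not Lobe's documented format:

```python
# A minimal sketch of running an exported model, assuming the export is a
# standard TensorFlow/Keras model. The path, input size, and preprocessing
# are assumptions, not Lobe's documented format.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("exported_lobe_model")

# Prepare one image the same way the model was trained (assumed 224x224 RGB).
image = tf.io.decode_jpeg(tf.io.read_file("photo.jpg"), channels=3)
image = tf.image.resize(image, (224, 224)) / 255.0
probabilities = model.predict(np.expand_dims(image.numpy(), axis=0))
print(probabilities)  # one probability per label
```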

Connect the lobes together

Connect intelligent building blocks, called lobes, to quickly create custom deep learning models. For example, connect the Hand & Face lobe to find the most prominent hand in an image. Then connect the Detect Features lobe to find the important features of that hand. Finally, connect the Generate Labels lobe to predict which emoji corresponds to the image. Optimize your model by adjusting each lobe's unique settings, or drill down inside to edit a lobe's sub-layers.
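Conceptually, chaining lobes is just function composition. The sketch below expresses the example pipeline as plain Python; the function names mirror the lobes above but are hypothetical stand-ins, not Lobe's real API:

```python
# A conceptual sketch of lobe composition as plain Python. The function
# names mirror the lobes above but are hypothetical stand-ins, not Lobe's
# real API; each body would be a trained model in practice.
def hand_and_face(image):
    """Find the most prominent hand in the image (Hand & Face lobe)."""
    ...

def detect_features(hand_region):
    """Extract the important features of the hand (Detect Features lobe)."""
    ...

def generate_labels(features):
    """Predict the emoji that corresponds to the features (Generate Labels lobe)."""
    ...

def compose(*stages):
    """Chain lobes the way the visual editor connects them."""
    def pipeline(x):
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

emoji_from_image = compose(hand_and_face, detect_features, generate_labels)
```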

Dataset visualization

In Lobe, the data used to train the model is presented visually, and the user can browse and drag all sorts of data. Click an example's icon to see how it behaves in the model. Your dataset is also automatically divided into two parts, Lesson and Test: the Lesson set is what the model is trained on, while the Test set is used to evaluate how the model behaves on real-world examples it has never seen before.
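The Lesson/Test split is the familiar train/test split from machine learning practice. A minimal sketch using scikit-learn; the 80/20 ratio is an assumption, since the article does not say what ratio Lobe uses:

```python
# A minimal sketch of the Lesson/Test split using scikit-learn. The 80/20
# ratio is an assumption; the article does not say what ratio Lobe uses.
from sklearn.model_selection import train_test_split

# Dummy stand-ins for a real labeled image dataset.
images = [f"image_{i}.jpg" for i in range(100)]
labels = [i % 2 for i in range(100)]  # two classes

lesson_x, test_x, lesson_y, test_y = train_test_split(
    images, labels, test_size=0.2, random_state=42, stratify=labels)

# The model trains only on the Lesson portion; Test stays held out so
# accuracy reflects examples the model has never seen.
```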

View results in real time

Because training in the cloud is very fast, you can see the model's performance at any time, and your computer does not slow down. You can track the model's accuracy through interactive charts and understand how it improves over time. In the end, Lobe automatically selects the point of best accuracy, so you don't have to worry about overfitting.
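"Automatically selecting the best accuracy" maps to standard checkpointing and early stopping. The Keras sketch below shows equivalent behavior; it mirrors the description above rather than Lobe's actual implementation:

```python
# A hedged sketch of "automatically selecting the best accuracy" using
# standard Keras callbacks. This mirrors the behavior described above;
# it is not Lobe's actual implementation.
import tensorflow as tf

callbacks = [
    # Keep only the weights with the best validation accuracy seen so far.
    tf.keras.callbacks.ModelCheckpoint(
        "best_model.keras", monitor="val_accuracy", save_best_only=True),
    # Stop once validation accuracy plateaus and roll back to the best
    # weights, which guards against overfitting.
    tf.keras.callbacks.EarlyStopping(
        monitor="val_accuracy", patience=5, restore_best_weights=True),
]

# model.fit(train_ds, validation_data=test_ds, epochs=100, callbacks=callbacks)
```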

Advanced control of each layer

Although Lobe provides high-level tools, it is built on the deep learning frameworks TensorFlow and Keras underneath, so it can give you an in-depth view of each layer of the model. Adjust parameters, add layers, and design new architectures with hundreds of advanced lobe blocks. All of these operations stay in sync as the user drags things around the interface.
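Because the bottom layer is Keras, the same layer-level edits can be expressed directly in code. A minimal sketch; the layer names and sizes are illustrative assumptions:

```python
# A minimal sketch of layer-level control expressed directly in Keras,
# the framework the article says Lobe is built on. The layer names and
# sizes are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", name="conv_1"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax", name="labels"),
])

# Inspect an individual layer's parameters...
print(model.get_layer("conv_1").get_config())

# ...or drill down and adjust it, e.g. freeze the layer during fine-tuning.
model.get_layer("conv_1").trainable = False
```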

Add to your application

After the model completes training, it can be exported to TensorFlow or CoreML and run directly in your app. Alternatively, through the Lobe Developer API, your model can be hosted on a cloud server and then integrated with your app in the language of your choice. Because Lobe is built on industry standards, your model's performance and compatibility are excellent (sample code for each language: https://lobe.ai).
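A cloud-hosted model is typically called over HTTP. The sketch below is hypothetical: the endpoint URL, payload shape, authentication, and response fields are placeholders, since the real interface is documented at https://lobe.ai:

```python
# A hypothetical sketch of calling a cloud-hosted model over HTTP. The
# endpoint URL, payload shape, authentication, and response fields are
# placeholders; the real Lobe Developer API is documented at https://lobe.ai.
import base64
import requests

with open("photo.jpg", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

response = requests.post(
    "https://example-lobe-endpoint/v1/predict",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder auth
)
print(response.json())  # e.g. {"label": "...", "confidence": 0.97}
```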

Explore the world with Lobe

With this tool, people can take on a variety of difficult tasks, including analyzing architectural structures, controlling drones, generating petal images, driving music visualizations, recognizing plants, tuning musical instruments, detecting cancer, lip reading, and more. The following are some of the wonderful projects people have built with Lobe.

Locating hands and faces

The task: given a portrait photo, the model must learn to draw bounding boxes around the subject's face and hands. This tool also serves as a building block in other Lobe models (such as Emoji Hand and Emoji Face below).

California Plant

As the name suggests, the model looks at a plant in a picture and identifies what it is. It was created to remind people which plants are toxic.

Rose Petal Generator

By looking at photos of thousands of rose petals, the model learns to generate new, realistic rose petals. It was created by the artist Sarah Meyohas, whose aim was to study the relationship between technology and aesthetics.

House structure analysis

By viewing 3D models of houses in different styles, together with measurements and metadata, the model learns to judge a house's structural style. It was created by the architect Kyle Steinfeld to help architects make better use of CAD software at design time.

Hot dog recognizer

Created by Jian Yang, it helps people identify different kinds of food; that is, whether the food in the picture is a hot dog.

Water meter monitoring

The model looks at a picture of a water tank and learns to find something like a gauge to calculate the amount of water in the tank, enabling round-the-clock monitoring of household water use.

Tuner

When the model hears an instrument being played, it can tell which instrument it is. With this tool, tuning software can automatically select the correct instrument instead of each person changing the settings manually.

The weight of coffee

As soon as the model sees a picture of a coffee machine, it can estimate how many ounces of coffee it contains. This tool can help brew a perfect cup of coffee without using a scale.

Automatic drone flight

Created by Alessandro Giusti, it allows drones to fly automatically along rugged mountain trails. By learning from video footage recorded by drones, the model can infer the direction of a mountain path and help with search-and-rescue missions.

Music visualization

The model listens to a song and learns to control a visual display based on the song's unique musical characteristics. This tool was created by Symmetry Labs and can drive VJ software automatically.

Emoji Face

The model converts the facial expression of the person in a picture into the corresponding emoji; the experimental results show that the accuracy is quite high.

Emoji Hand

Like the previous tool, it converts the gestures people make in a picture into emoji, so in the future you won't have to hunt for the right one in the emoji library.

Emoji Drawing

It converts a doodle into an emoji: in the future, a couple of quick strokes will be enough to pick out the expression you want.

Skin cancer detection

The model learns to differentiate areas of skin cancer from ordinary moles, helping doctors make diagnoses.

Draw a quadrilateral

When it sees the person in the picture extend two fingers, the model automatically draws a rectangular border based on the fingers. This tool could let people use AR equipment in the future to estimate or adjust the size of an object.

Hand angle

Another geometry problem: the model measures the angle of the human hand in the picture.

Lip reading

The model watches silent video and learns to understand what the people in it are saying. This tool could be used to take calls in quiet public places without disturbing others.

Baby monitor

By observing photos from a camera fixed above the crib, the model learns to judge whether the baby is awake or asleep and informs the parents of the baby's condition.

Conclusion

According to team members, they spent two years building this product, and it was warmly received as soon as it was released. Many tools are already powered by Lobe, covering all areas of life. We expect Lobe to be applied to even more software and to open up more possibilities for deep learning models.
