ILLA Cloud x Hugging Face: Unleash the Possibilities of AI Models with a Low-Code Tool

ILLA proudly announces a partnership with Hugging Face, a leading provider of natural language processing (NLP) tools and services. Hugging Face is best known for its open-source NLP library, which provides tools for text generation, language translation, and named entity recognition. With Hugging Face, ILLA is more capable than before, and our users can do far more with AI.
ILLA Cloud

At ILLA Cloud, we are dedicated to providing businesses with reliable, efficient, and cost-effective cloud solutions. Our goal is to help businesses leverage the power of the cloud to achieve their objectives and enhance their operations. We believe that the cloud represents the future of computing, and we are committed to helping businesses of all sizes benefit from this technology.
Our platform offers a range of cloud solutions, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). We provide an open-source low-code platform for building internal tools that connect to databases and APIs. Our platform supports multiple data sources, including Redis, PostgreSQL, MongoDB, MySQL, Stripe, Google Sheets, AWS S3, and more. With our drag-and-drop interface and collaborative features, you can build applications quickly and efficiently.
We understand that businesses need to communicate with their customers and users effectively, which is why we offer an easy way to integrate ChatGPT into your applications. This enables businesses to streamline their workflow and communicate with their users and customers more efficiently. Our platform also offers automation, monitoring, and analytics, which can help businesses to streamline their operations and improve their performance.
At ILLA Cloud, we pride ourselves on our commitment to customer satisfaction. We work closely with our clients to understand their needs and provide tailored solutions that meet their specific requirements. Our team of experts has extensive experience in cloud computing and can provide businesses with the guidance and support they need to succeed in the cloud.
Hugging Face

Hugging Face is the GitHub of the machine learning community, with hundreds of thousands of pre-trained models and 10,000 datasets currently available. You can freely access models and datasets shared by other industry experts or host and deploy your own models on Hugging Face.
Some examples of models available in the Hugging Face library include:
- BERT (Bidirectional Encoder Representations from Transformers): BERT is a transformer-based model developed by Google for various NLP tasks. It has achieved state-of-the-art results in language understanding and machine translation tasks.
- GPT (Generative Pre-trained Transformer): GPT is another transformer-based model, developed by OpenAI. It is primarily used for language generation tasks, such as translation and text summarization.
- RoBERTa (Robustly Optimized BERT): RoBERTa is an extension of the BERT model that was developed to improve BERT's performance on various NLP tasks.
- XLNet: XLNet is a transformer-based model developed by researchers from Google and Carnegie Mellon University. It uses a permutation-based training objective designed to improve the performance of transformer models on language understanding tasks.
- ALBERT (A Lite BERT): ALBERT is a version of the BERT model that was developed to be more efficient and faster to train while maintaining good performance on NLP tasks.
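To make this concrete, here is a minimal sketch of loading one of these checkpoints through the transformers library's pipeline API and running a masked-word prediction. It assumes transformers and a backend such as PyTorch are installed locally, and the example sentence is our own.

```python
# A minimal sketch: load a pre-trained BERT checkpoint from the Hugging Face Hub
# and run a masked-word prediction. Assumes `pip install transformers torch`.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The pipeline returns the top candidates for the [MASK] token.
for candidate in unmasker("Low-code tools make building internal apps [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```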
What can you do with Hugging Face in ILLA Cloud?
In Hugging Face, over 130,000 machine-learning models are available through the public API, which you can use and test for free at huggingface.co/models. In addition, if you need a solution for production scenarios, you can use Hugging Face's Inference Endpoints; see huggingface.co/docs/inference-endpoints/index for how to deploy and access them.
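For reference, the hosted Inference API is a plain HTTP endpoint. Below is a minimal sketch of calling it directly with Python's requests library; the access token, the gpt2 model ID, and the prompt are placeholders for illustration.

```python
# A minimal sketch of calling the hosted Hugging Face Inference API directly.
# The token, model ID, and prompt below are placeholders, not real credentials.
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_xxx"}  # token from your Hugging Face profile settings

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Low-code platforms are useful because"},
)
print(response.json())  # e.g. [{"generated_text": "..."}]
```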
ILLA Cloud provides dozens of commonly used front-end components, allowing you to quickly build different front-end interfaces based on your specific needs. At the same time, ILLA offers a connection to Hugging Face, allowing you to quickly connect to the API, send requests, and receive returned data. By connecting the API and front-end components, you can build a flow where users enter content on the front end and submit it to the API, and the generated content returned by the API is displayed back on the front end.
How to use Hugging Face in ILLA Cloud?
A step-by-step tutorial on using Hugging Face in ILLA Cloud.
Step 1: Build UI with components on ILLA Cloud
Build a front-end interface based on your expected usage scenario. For example, if your product takes in text and outputs an image, you could use input and image components; if it takes in text and outputs generated text, you could use input and text components.
The following image is an example of a front-end page for a product that answers questions based on context.

Step 2: Create a Hugging Face resource and configure the action
Click + New in the action list and select Hugging Face Inference API.

Fill in the form to connect to your Hugging Face account:
- Name: the name that will be displayed in ILLA
- Token: get it from your Hugging Face profile settings

Confirm the model information in Hugging Face before configuring the action:
Select a model from the Hugging Face Model Page. Let's use deepset/roberta-base-squad2 as an example: open its detail page > click Deploy > click Inference API.

The parameters after “inputs” are what you should fill in ILLA.

In ILLA Cloud, we should fill in the Model ID and Parameter. Taking the above model as an example, “inputs” is a key-value pair, so we can fill it in as key-value pairs or as JSON.


We also support plain-text and binary inputs, which cover all of the Hugging Face Inference API's input types.
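To make the different input shapes concrete, here is a hedged sketch of how the same endpoint pattern accepts a JSON key-value body (question answering), a plain text string (text generation), or raw binary data (image classification). The token, the example question and context, the gpt2 and google/vit-base-patch16-224 model IDs, and the image file name are placeholders of our own.

```python
# A hedged sketch of the three input shapes the Inference API accepts.
# The token, prompts, extra model IDs, and file name are placeholders.
import requests

HEADERS = {"Authorization": "Bearer hf_xxx"}

# 1. Key-value (JSON) input, e.g. question answering
r = requests.post(
    "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2",
    headers=HEADERS,
    json={"inputs": {"question": "What is ILLA Cloud?",
                     "context": "ILLA Cloud is a low-code platform for internal tools."}},
)

# 2. Plain text input, e.g. text generation
r = requests.post(
    "https://api-inference.huggingface.co/models/gpt2",
    headers=HEADERS,
    json={"inputs": "Once upon a time"},
)

# 3. Binary input, e.g. image classification on a local file
with open("cat.jpg", "rb") as f:
    r = requests.post(
        "https://api-inference.huggingface.co/models/google/vit-base-patch16-224",
        headers=HEADERS,
        data=f.read(),
    )
```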
Step 3: Connect actions to components
To pass the user's front-end input to the API, you can use {{ }} to retrieve data entered in a component. For example, if input2 is the component for entering the question and input1 is the component for entering the context, we can fill in question and context as the keys, and {{ input2.value }} and {{ input1.value }} as the values:
{
  "question": {{input2.value}},
  "context": {{input1.value}}
}
Before displaying the action's output in a front-end component, we should confirm which field each model places its results in. Still taking deepset/roberta-base-squad2 as an example, the results are as follows:
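For orientation, a question-answering model returns an object with score, start, end, and answer fields. The sketch below shows the rough shape with made-up values; depending on the client, the result may arrive as a single object or wrapped in a one-element list, which is why the expression that follows indexes into it.

```python
# Illustrative shape of a question-answering response (values are made up).
# The result may be a single object or a one-element list depending on the client.
example_response = [
    {
        "score": 0.92,   # model confidence
        "start": 42,     # character offset of the answer within the context
        "end": 61,
        "answer": "a low-code platform",
    }
]
print(example_response[0]["answer"])
```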

So we can get the answer with {{ textQuestion.data[0].answer }} (where textQuestion is the name of the action).

Demo


Use case 2:
Here we will demonstrate how to use Stable Diffusion v1.5 and Stable Diffusion v2.1 in ILLA Cloud through Hugging Face.
Step 1: Building a Front-end Interface
We construct a front-end interface by utilizing a drag-and-drop approach to place essential components such as input fields, buttons, images, and more. After adjusting the styles of the components, we obtain the following complete webpage.
Step 2: Creating Resources and Actions
We establish resources and actions by utilizing the Hugging Face Inference API to connect to the Stable Diffusion models. Two models can be utilized: runwayml/stable-diffusion-v1-5 and stabilityai/stable-diffusion-2-1.
Choose the "Hugging Face Inference API" for this purpose.
Provide a name for this resource and enter your token from the Hugging Face platform.
In the Action configuration panel, please enter the Model ID and Parameter. We will retrieve the selected model from radioGroup1, so fill in the Model ID as {{radioGroup1.value}}. For the input, since it is obtained from the input field, fill in the parameter as {{input1.value}}. The configuration should be as shown in the following image.
We attempt to input "A mecha robot in a favela in expressionist style" in the input component and run the action. The resulting execution is as follows. From the left panel, you can observe the available data that can be called, including base64binary and dataURI.
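Under the hood, a text-to-image model on the Inference API returns raw image bytes, and base64binary and dataURI are simply two encodings of that payload. Below is a minimal sketch of the equivalent direct call, with the access token as a placeholder; swap the model ID for stabilityai/stable-diffusion-2-1 to try the newer model.

```python
# A minimal sketch: call a Stable Diffusion model on the Inference API and
# encode the returned image bytes in the same two ways ILLA exposes them.
# The token is a placeholder; the prompt matches the one used above.
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5"
headers = {"Authorization": "Bearer hf_xxx"}

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "A mecha robot in a favela in expressionist style"},
)

image_bytes = response.content                          # raw binary image data
base64binary = base64.b64encode(image_bytes).decode()   # base64-encoded string
mime = response.headers.get("content-type", "image/jpeg")
data_uri = f"data:{mime};base64,{base64binary}"         # usable as an image source

with open("output.jpg", "wb") as f:
    f.write(image_bytes)
```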
Step 3: Displaying Data on Components
To display the image obtained from Step 2, we modify the image source of the image component to {{generateInput.fileData.dataURI}}. This will enable us to show the generated image.
Step 4: Running the Action with Components
To run the action created in Step 2 when the button component is clicked, add an event handler to the button component.
Step 5: Testing
Following the previous four steps, you can utilize additional components and data sources to complete other tasks and build a more comprehensive tool. For example, you can use other models to assist in generating prompts or store prompts in localStorage or a database. Let's take a look at the complete outcome when all the steps are implemented.
Conclusion
In conclusion, the partnership between ILLA and Hugging Face is set to revolutionize the world of low-code development. With the ability to access over 130,000 pre-trained machine-learning models, and the ease of use provided by ILLA Cloud, developers can now build highly sophisticated applications with incredible speed and efficiency. This partnership offers a complete solution for NLP and AI, making it possible for users to do more with AI than ever before.
Try ILLA Cloud For Free: cloud.illacloud.com
GitHub page: github.com/illacloud/illa-builder
Join Discord community: discord.com/invite/illacloud