Illusion Diffusion

Illusion Diffusion Model

Illusion Diffusion is a model built on the foundation of SD 1.5, designed to create illusionary effects within a given image. To use this model, you need to provide an input image or pattern, along with a prompt. The model will then generate a new image where the original image or pattern is concealed within the resulting artwork.

For more details on the Illusion Diffusion model, please refer to the Illusion Diffusion section.

Illusion Diffusion API

POST /getImagetoImage

Headers

Content-Type: application/json
Authorization: Bearer <token>

Body

app_id (string)

-> Each model is uniquely characterized by its own app_id.

image (string)

-> The image parameter specifies the URL of an existing image that will be used as a reference for the generation process.

-> If the original image dimensions exceed 1536x1536 pixels, the image will be adjusted to fit within this size while preserving the original aspect ratio.

prompt (string)

-> The prompt parameter is the textual input that guides the image generation process. This prompt serves as an artistic compass, shaping the visual output.

guidance_scale (float)

-> The guidance_scale parameter determines how closely the generated image adheres to the provided prompt. Higher values result in the model following the prompt more closely, while lower values allow for more creative deviation.

-> The valid range for the guidance_scale parameter is between 1 and 30.

batch (int)

-> The batch parameter allows you to specify the number of images to generate at once.

-> The valid range for this parameter is between 1 and 8.

strength (float)

-> The strength parameter specifies the degree of transformation applied to the reference image.

-> A higher strength value (up to 1) applies a greater degree of transformation, so the output deviates further from the reference image, while lower values keep the result closer to the original.

negative_prompt (string)

-> The negative_prompt parameter allows you to specify content that you want the image generation model to avoid or minimize in the output. This can be useful for excluding certain visual elements or styles that you do not want to be present in the generated image.

num_inference_steps (int)

-> The num_inference_steps parameter represents the number of denoising iterations to perform during the image generation process. Generally, more iterations can result in higher-quality images, but they also increase the time required for generation.

-> The valid range for the num_inference_steps parameter is between 1 and 50.

celery (bool)

-> The celery parameter is used for queuing tasks that require extended processing time. When you enqueue a task, you receive a unique task_id. This task_id allows you to check the task's status later using the task status API, which is useful for managing and tracking long-running tasks.

inference_type (string)

-> The inference_type parameter allows you to specify the GPU to be used for the image generation task. The supported values are:

  • a10g

  • a100

  • h100

The different GPU options provide varying levels of performance and capability, allowing you to choose the most suitable GPU based on your requirements and the demands of the task.
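
The sketch below, written in Python with the requests library, shows one way the headers and body parameters above could be combined into a call to this endpoint. The API host, bearer token, app_id, and image URL are placeholders for illustration and are not taken from this documentation.

import requests

# Placeholder host and token; substitute your actual API base URL and bearer token.
API_URL = "https://<api-host>/getImagetoImage"
TOKEN = "<token>"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {TOKEN}",
}

payload = {
    "app_id": "<illusion-diffusion-app-id>",     # each model has its own app_id
    "image": "https://example.com/pattern.png",  # reference image or pattern to conceal
    "prompt": "a medieval village at sunset, highly detailed",
    "negative_prompt": "blurry, low quality",
    "guidance_scale": 7.5,        # 1-30: higher values follow the prompt more closely
    "strength": 0.8,              # up to 1: degree of transformation applied to the reference
    "batch": 1,                   # 1-8 images per request
    "num_inference_steps": 30,    # 1-50 denoising iterations
    "celery": False,              # set to true to enqueue the task and receive a task_id
    "inference_type": "a10g",     # a10g, a100, or h100
}

response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())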

Response

{
  "time_required": "",
  "error": "",
  "error_data": "",
  "input": "",
  "output": "",
  "app_id": "",
  "task_id": "",
  "status": ""
}
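
The snippet below is a minimal Python sketch of one way to interpret this response. The field names follow the schema above; which fields are populated for queued versus synchronous requests is an assumption, not something this page specifies.

def handle_response(result: dict, used_celery: bool) -> str:
    """Return the generated output or the task_id from a /getImagetoImage response."""
    if result.get("error"):
        # error_data may carry additional detail about the failure (assumption)
        raise RuntimeError(f"generation failed: {result['error']} ({result.get('error_data')})")
    if used_celery:
        # Tasks enqueued with celery=true return a task_id; poll the task status API
        # (documented separately) with this id to track progress.
        return result["task_id"]
    # Synchronous requests return the generated image output directly.
    return result["output"]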

Run the API

To test this API, please use the following link:
