MeiGen lets you generate images and videos using multiple AI models from a unified interface. Open the generation sidebar from the floating dock (desktop) or the navigation menu.

Model Selector

At the top of the generation sidebar, the model selector dropdown lets you choose which AI model to use. Each option shows:
  • Provider icon — visual indicator of the model provider (Google, ByteDance, Midjourney, etc.)
  • Model name — the display name of the model
  • Credit cost — how many credits this model uses per generation
Selecting a different model automatically updates the available aspect ratios, resolution options, and advanced settings to match that model’s capabilities.

Model Comparison

Compare all 8 models — features, pricing, strengths, and limitations

Prompt Editor

Writing Effective Prompts

  • Be specific — describe composition, lighting, colors, textures, and perspective for the best results
  • Short prompts work too — use the Enhance button to expand brief descriptions into detailed prompts automatically
  • Language — most models accept English prompts. Niji 7 will auto-translate non-English prompts to English

Variable Tags

Some prompts from the gallery contain blue [placeholder] tags — for example, [style], [subject], or [color palette]. These are editable template variables. Click any blue tag to select it, then type your replacement text. This makes it easy to create variations of a prompt without rewriting the whole thing.
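Conceptually, filling in tags is plain placeholder substitution. A minimal illustrative sketch (the `fill_template` helper is hypothetical, not part of MeiGen; in the app you simply click the tags):

```python
import re

def fill_template(prompt: str, values: dict) -> str:
    """Replace [tag] placeholders with user-supplied text; unknown tags are kept."""
    return re.sub(r"\[([^\]]+)\]",
                  lambda m: values.get(m.group(1), m.group(0)),
                  prompt)

# fill_template("A [subject] in [style]",
#               {"subject": "red fox", "style": "watercolor"})
# keeps any tag you haven't filled in yet, so partial edits are safe
```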

@ Image Mentions

When you have multiple reference images uploaded, you can mention specific images inline in your prompt:
  1. Type @ in the prompt editor
  2. A popup appears showing thumbnails of your uploaded reference images
  3. Select an image to insert a mention tag (e.g., @image1, @image2)
  4. Write instructions around the mentions: “Use @image1 as the background scenery and apply the color palette from @image2”
This gives you precise control over how each reference image influences the generation.
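Under the hood, mention tags are just `@imageN` tokens in the prompt text. A small sketch of how such tags can be picked out of a prompt (the `extract_mentions` helper is illustrative, not MeiGen's actual parser):

```python
import re

def extract_mentions(prompt: str) -> list:
    """Collect @imageN mention tags in the order they appear."""
    return re.findall(r"@image\d+", prompt)

# extract_mentions("Use @image1 as the background and the palette from @image2")
# -> ["@image1", "@image2"]
```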

# Color Picker

When you need to specify an exact color for an object in your prompt, pick one from the palette:
  1. Type # in the prompt editor
  2. A palette appears with common colors and curated hues grouped by shade
  3. Click any swatch to insert a color tag (e.g., #dc143c)
  4. Write instructions around the color: “A t-shirt in #dc143c paired with #f5f5dc pants”
Colors are submitted to the model as hex values — more precise than describing “deep red” or “beige” in words.
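To see why hex tags are precise, note that each tag encodes exact RGB components. An illustrative decoder (not part of MeiGen):

```python
def hex_to_rgb(tag: str):
    """Decode a #rrggbb color tag into its (red, green, blue) components."""
    h = tag.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# hex_to_rgb("#dc143c") -> (220, 20, 60), the exact crimson shade,
# with no ambiguity about what "deep red" means
```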

Translate Button

Click the translate icon (top right of the prompt editor) to translate your prompt into English. This is useful when writing prompts in another language, as most models produce the best results with English input.

Prompt Enhancement

Click the Enhance Prompt button to automatically enhance your prompt based on the selected model:
  • Polish mode (non-Niji 7) — preserves intent, adds visual details like composition, lighting, and materials
  • Expand mode (Niji 7) — expands into Midjourney-optimized language with anime/illustration cues; non-English prompts are auto-translated
For example, a brief prompt such as "a cat sitting on a windowsill" can be expanded into a fully detailed scene description.
Enhancement works best on brief prompts (under 30 words). Skip it for prompts that already have detailed visual descriptions — the AI may over-process them.
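The "brief prompt" guideline amounts to a simple word-count check. An illustrative heuristic (the threshold is the rough figure quoted above, not an exact rule MeiGen applies):

```python
def should_enhance(prompt: str) -> bool:
    """Enhancement pays off most on brief prompts (roughly under 30 words)."""
    return len(prompt.split()) < 30
```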

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| Cmd+Enter / Ctrl+Enter | Trigger generation |

Reference Images

Upload reference images to guide the AI’s output. The AI uses these as visual context — for style matching, composition guidance, or subject reference.
  • Upload methods — drag-and-drop files onto the reference area, or click to browse
  • Supported formats — JPG, PNG, WebP
  • Auto-compression — images are automatically compressed to max 2MB / 2048px before upload
  • Thumbnail grid — uploaded images appear as thumbnails with an × button to remove each one
  • Per-model limits — the maximum number of reference images varies by model (0–5). Models that don’t support references (like Z Image Turbo) hide this section entirely.
See the Models page for the exact reference image limit of each model.
You don’t need to download images first. Drag any gallery card directly into the reference image area — the image URL is used automatically.
Combine gallery references with @ mentions for powerful multi-reference workflows. For example, drag two gallery cards as references, then write: “Combine the composition of @image1 with the color palette of @image2.”
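The auto-compression step scales the longest side down to 2048px while preserving aspect ratio. A rough sketch of the resize math (illustrative; the real client also recompresses as needed to stay under 2MB):

```python
def fit_within(width: int, height: int, max_side: int = 2048):
    """Target dimensions after scaling the longest side down to max_side."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

# fit_within(4096, 3072) -> (2048, 1536); images already within the
# limit are left at their original size
```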

Reference Types (Niji 7 Only)

When using Midjourney Niji 7 with a reference image, a dropdown appears to choose how the reference is interpreted:
| Type | Description |
| --- | --- |
| Content Reference | The AI uses the image as subject matter — it tries to recreate similar content. Control influence with the Image Weight (iw) parameter (0–2). |
| Style Reference | The AI extracts the visual style (colors, mood, technique) without copying the content. Control with Style Weight (sw, 0–1000). |

When to Use Reference Images

Some prompts work best — or only make sense — with a reference image attached. MeiGen automatically detects keywords like “reference image” or “uploaded image” in your prompt and shows a hint if no reference is uploaded. Common patterns that need reference images:
| Prompt Pattern | Why Reference Needed |
| --- | --- |
| "Transform this reference image into…" | The AI needs a source image to transform |
| "Based on the uploaded image, create…" | Explicitly references an uploaded image |
| "Keep the composition of @image1 but change…" | Uses @ mention syntax to reference uploads |
| "Redraw / reimagine the reference photo in…" | Requires a source to redraw from |
If your prompt describes transforming, redrawing, or referencing a specific image, always upload that image as a reference first. Without it, the AI will generate something unrelated to your intent.
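The hint described above amounts to a keyword check against the prompt. An illustrative version of such a heuristic (the exact keyword list MeiGen uses is not documented here):

```python
def needs_reference_hint(prompt: str, has_reference: bool) -> bool:
    """True when the prompt mentions a reference but no image is uploaded."""
    keywords = ("reference image", "uploaded image", "reference photo", "@image")
    return not has_reference and any(k in prompt.lower() for k in keywords)
```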

Generation Options

Count

Choose how many images to generate per request (1–4).
  • Free users — limited to 1 image per request. A lock icon appears on the + button.
  • Paid users — can generate up to 4 images in parallel per request.

Aspect Ratio

The default is Auto — no need to pick manually; a suitable ratio is chosen for you. Open the dropdown to pick a specific ratio when you want precise control.
| Ratio | Orientation | Common Use |
| --- | --- | --- |
| Auto (default) | — | Hands-off |
| 1:1 | Square | Social media profiles, icons |
| 3:4, 4:5, 2:3 | Portrait | Phone wallpapers, posters |
| 9:16 | Tall portrait | Stories, Reels |
| 1:4, 1:8 | Ultra tall | Scrolling banners (Nanobanana 2 only) |
| 4:3, 3:2 | Landscape | Desktop wallpapers |
| 16:9, 21:9 | Wide | Cinematic, ultrawide displays |
| 4:1, 8:1 | Ultra wide | Panoramic banners (Nanobanana 2 only) |
Not all models support all ratios. The selector automatically filters to show only ratios supported by the current model. See the Models page for each model’s supported ratios.
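The filtering behaves like a capability lookup per model. A sketch with a hypothetical capability map (the real supported sets are on the Models page):

```python
ALL_RATIOS = ["1:1", "3:4", "4:5", "2:3", "9:16", "1:4", "1:8",
              "4:3", "3:2", "16:9", "21:9", "4:1", "8:1"]

# Hypothetical capability map for illustration only.
MODEL_RATIOS = {
    "Nanobanana 2": set(ALL_RATIOS),           # includes the ultra ratios
    "Example Model": {"1:1", "9:16", "16:9"},  # a model with a narrow set
}

def ratio_options(model: str) -> list:
    """Auto is always offered; the rest are filtered by model support."""
    supported = MODEL_RATIOS.get(model, set())
    return ["Auto"] + [r for r in ALL_RATIOS if r in supported]
```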

Resolution

Choose the output resolution when available:
| Resolution | Availability |
| --- | --- |
| 2K | Available to all users (default) |
| 3K | Select models only (e.g., Seedream 5.0 Lite) |
| 4K | Paid users only — a lock icon appears for free users |

Niji 7 Advanced Options

When using Midjourney Niji 7, the Advanced Options button lets you tune extra generation parameters. Most users can leave everything at defaults.
| Parameter | Range | Default | Description |
| --- | --- | --- | --- |
| Stylize | 0–1000 | 100 | Controls how strongly Midjourney's aesthetic is applied. Lower = more literal, higher = more artistic. |
| Chaos | 0–100 | 0 | Adds variation between runs. Higher values produce more unexpected results. |
| Weird | 0–3000 | 0 | Introduces unusual, experimental qualities. |
| Raw Mode | on/off | off | When off, Niji 7's anime style optimization is fully applied; when on, produces a raw, less processed aesthetic. Recommended to keep off for best anime results. |
| Image Weight (iw) | 0–2 | 1 | How much influence the content reference image has. Only visible when a content reference is set. |
| Style Reference (sref) | URL/text | — | Provide a URL to an image whose style you want to mimic. Disabled when reference images are set to Style mode. |
| Style Weight (sw) | 0–1000 | 100 | How strongly the style reference is applied. Only visible when sref or style reference images are present. |
| Style Version (sv) | 1–4 | 4 | Style reference version. |

Describe Image

See an image you like but don’t know how to describe it? Describe Image uses AI vision to analyze an image and generate a detailed text prompt that captures its content, style, and composition.

How to Use

  1. Image detail dialog — open any image’s detail view and click the Describe Image button. The AI analyzes the image and generates a descriptive prompt.
  2. Drag and drop — drag any external image (or a gallery card) onto the Describe Image dropzone above the prompt editor in the generation sidebar.

What It Generates

The AI examines the image and produces a prompt describing:
  • Subject matter and scene composition
  • Art style and technique (photorealistic, illustration, watercolor, etc.)
  • Lighting, color palette, and mood
  • Camera angle and perspective
You can use the generated prompt as-is, or edit it to create variations of the original image.
Describe Image works well as a starting point. Combine it with Prompt Enhancement — first describe an image you like, then enhance the result for even more detail.

Video Generation

MeiGen offers two video models: Seedance 2.0 (per-second pricing, 4–15 second duration) and Veo 3.1 (fixed 8 seconds, with audio and style presets). Open the video sidebar from the Generate Video button in the floating dock.

Frame Images

Upload up to 2 frame images to control the video’s composition:
  • First Frame — sets the starting composition of the video
  • Last Frame — sets the ending composition
Both are optional. You can also drag gallery cards directly as frame images.

Seedance 2.0

| Option | Values | Description |
| --- | --- | --- |
| Aspect ratio | adaptive, 16:9, 4:3, 1:1, 3:4, 9:16, 21:9 | 7 ratios; adaptive matches the reference image/video dimensions |
| Resolution | 480p, 720p (default 480p) | 480p saves credits |
| Duration | 4–15 seconds (default 5) | Slider or input |
| Audio | Auto-generated | Always on |
| Reference video | Yes | Used for "video continuation" — the reference is prepended to the generated content |
Pricing is per-second; the full formula (including the minimum-billable table for continuation) is on the Models page. Three generation modes:
  • Text-to-video: just write a prompt
  • Image-to-video: upload 1–2 frame images to control first/last frame
  • Video extension: upload a reference video (see the Extending a Video section below)
Adding a reference video makes the total credit cost depend on its length: billed seconds = “reference video duration + your chosen duration” (never below the minimum-billable floor). Longer reference videos cost more. The sidebar shows the exact total before you submit — confirm before clicking Generate.

Veo 3.1 — Fixed 8-second Video with Audio

| Option | Values | Description |
| --- | --- | --- |
| Aspect ratio | 16:9, 9:16 | Landscape or portrait only |
| Resolution | 720p | Fixed |
| Duration | 8 seconds (fixed) | Not adjustable |
| Style | Auto / Realistic / Anime / Cinematic | Visual style preset |
| Speed | Auto / Slow / Fast | Playback speed control |
| Audio | Auto-generated | Always on |
Pricing: 20 credits per video, flat.
Both video models auto-generate AI audio — no separate audio input is needed. Full specs, pricing formulas, and a side-by-side comparison are on the Models page.

Converting Images to Video

After generating an image, hover over the completed card and click the Animate button. This sends your generated image to the video sidebar as the first frame, making it easy to turn a still image into a short video clip.

Extending a Video

Seedance 2.0 only. You can take an existing video as a starting point and generate a new segment appended to it.

Three entry points

  1. From a video card: hover over any completed video card and click the Extend button in the top-left corner — the video loads into the Seedance sidebar as a reference
  2. Drag a video to the reference-image area: drag a video (local or from the gallery) onto the reference upload area — a confirmation popup asks “Use this as an extension reference?”
  3. Direct upload: in the Seedance sidebar’s “Extend or reference video” upload area, pick a local video

Workflow

  1. Trigger any of the entries above — the reference video appears in the sidebar preview
  2. The prompt box is auto-prefilled with "Extend this video with the following plot: " — keep this prefix and add your desired continuation
  3. Pick resolution and your new video’s duration (4–15 seconds)
  4. Click Generate

Key behavior

Your prompt must explicitly say to extend the video. If the prompt only describes a scene without mentioning extension, the model defaults to treating the video as a reference (not temporally concatenated). The auto-prefilled "Extend this video with the following plot: " prefix exists precisely to avoid this pitfall — keep it and append your content.
Output = reference video + new content, concatenated: the result is a single continuous video — the first half is your reference video preserved, the second half is the AI-generated continuation per your prompt. Total length ≈ reference duration + your chosen duration.
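To stay on the extension path, build the final prompt from the prefix plus your continuation. An illustrative helper (not part of MeiGen; the app prefills the prefix for you):

```python
EXTEND_PREFIX = "Extend this video with the following plot: "

def build_extension_prompt(continuation: str) -> str:
    # Keeping the prefix tells the model to concatenate, not merely reference.
    return EXTEND_PREFIX + continuation
```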

Pricing

Extension rates are lower (480p 8 credits/sec, 720p 16 credits/sec), but billed seconds include the reference video’s duration (never below the minimum-billable floor). The full formula, lookup table, and examples are in Models → Seedance 2.0. Typical costs:
  • Extend by 5 sec on a 3-second video: 72 credits (480p)
  • Extend by 10 sec on a 5-second video: 272 credits (720p)
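Under the stated rule, the cost is billed seconds times the per-second rate, where billed seconds never drop below the model's floor. A sketch of that calculation (the `min_billable` values come from the lookup table in Models → Seedance 2.0 and are not reproduced here, so the example totals above cannot be recomputed from this sketch alone):

```python
RATE = {"480p": 8, "720p": 16}  # extension credits per billed second

def extension_cost(ref_seconds: int, new_seconds: int,
                   resolution: str, min_billable: int) -> int:
    # Billed seconds = reference duration + chosen duration,
    # never below the minimum-billable floor.
    billed = max(min_billable, ref_seconds + new_seconds)
    return billed * RATE[resolution]
```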

Generation History

After submitting a generation request, your images appear as cards in the generation history panel (accessible via the floating dock’s History button or the sidebar).

Pending State

Cards show live progress while generating; video generations also display an estimated completion time (e.g., 2–5 min). Typical generation times by model:
| Model | Typical Time |
| --- | --- |
| Z Image Turbo | ~5 seconds |
| Nanobanana 2, Seedream | ~15 seconds |
| Nanobanana Pro | ~15 seconds |
| GPT image 1.5 | ~20 seconds |
| Niji 7 | ~60 seconds |
| Seedance 2.0 (video) | ~1–3 minutes (varies by duration / resolution) |
| Veo 3.1 (video) | 2–6 minutes |

Failed State

If a generation fails, the card shows:
  • The reason for failure
  • “Credits refunded” note — credits are automatically returned when a generation fails
  • Retry button — click to resubmit the same request with the original prompt and settings

Completed State

Hover over a completed generation card to reveal action buttons:
| Action | Description |
| --- | --- |
| Edit | Adds the generated image as a reference for a new generation |
| Cutout | Removes the background, creating a transparent PNG (useful for logos, stickers, product compositing). Creates a new card; the original is preserved |
| Animate | Converts the still image into a video (image-to-video) |
| Delete | Removes the card from your history |
| Use Idea | Copies the prompt back to the editor for reuse |
| Favorite | Saves to your favorites collection |
| Download | Saves the image to your device |
| Share to X | Opens a pre-filled tweet with your generated image |
The card also shows the aspect ratio and resolution labels in the bottom-left corner.

Credits and Billing

MeiGen uses a dual credit system:
  • Daily credits: refresh every 24 hours, used for basic models (Z Image Turbo, GPT image 1.5).
  • Purchased credits: obtained by topping up, never expire, usable on all models.
Generating with premium models (Nanobanana, Seedream, Midjourney, Seedance, Veo, etc.) requires purchased credits. API Token calls only use purchased credits, regardless of the model; daily credits never apply to the API path. The credit cost is shown on the Generate button before you submit. Hover over the credits card at the bottom of the left sidebar to see your balance breakdown.
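The routing rule can be summarized simply: the API path and premium models always draw from purchased credits; only basic models on the web UI can spend daily credits. An illustrative restatement of that rule:

```python
def credit_pool(via_api: bool, premium_model: bool) -> str:
    """Which credit pool a generation draws from, per the rules above."""
    if via_api:
        return "purchased"   # the API path never touches daily credits
    if premium_model:
        return "purchased"   # premium models require purchased credits
    return "daily"           # basic models can spend daily credits
```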

Free vs Paid

| Feature | Free | Paid |
| --- | --- | --- |
| 4K resolution | Locked | Available |
| Images per request | 1 | Up to 4 |
| API access | Not available | Available (purchased credits only) |

FAQ

How long does generation take?
Generation times vary by model and server load. Niji 7 typically takes ~60 seconds, and Veo 3.1 video generation takes 2–6 minutes. The progress bar uses an estimate — actual time may vary during peak hours.

What happens if a generation fails?
Credits are automatically refunded when a generation fails. The refund is reflected immediately in your balance. You can use the Retry button to try again.

How do I regenerate with the same prompt?
Use the Retry button on a failed card, or click Use Idea on a completed card to copy its prompt back to the editor. Note that AI generation is non-deterministic — even with the same prompt, results will vary.

What is the difference between Cutout and Animate?
Cutout removes the background from a still image, creating a transparent PNG. Animate converts the still image into a short video clip using the video generation model.

Why is 4K resolution locked?
4K resolution is available to paid users only. Free users can generate at 2K resolution. Upgrade your account to unlock higher resolutions.