<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Generative AI | 複合知能メディア研究室</title><link>https://im.sanken.osaka-u.ac.jp/en/tags/generative-ai/</link><atom:link href="https://im.sanken.osaka-u.ac.jp/en/tags/generative-ai/index.xml" rel="self" type="application/rss+xml"/><description>Generative AI</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 30 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>https://im.sanken.osaka-u.ac.jp/media/logo_hu_5496ce6447801688.png</url><title>Generative AI</title><link>https://im.sanken.osaka-u.ac.jp/en/tags/generative-ai/</link></image><item><title>Image and Video Generation and Editing</title><link>https://im.sanken.osaka-u.ac.jp/en/topics/gen-ai/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><guid>https://im.sanken.osaka-u.ac.jp/en/topics/gen-ai/</guid><description>&lt;h2 id="distribution-control-of-diffusion-models">Distribution Control of Diffusion Models&lt;/h2>
&lt;p>&lt;a href="https://doi.org/10.1016/j.patcog.2025.112614" target="_blank" rel="noopener">Weng Ian Chan, Hiroaki Santo, Yasuyuki Matsushita, Fumio Okura, &amp;ldquo;Instance-wise distribution control of text-to-image diffusion models,&amp;rdquo; Pattern Recognition 172(C):112614, 2026&lt;/a>&lt;/p>
&lt;p>Text-to-image diffusion models are increasingly used to generate synthetic datasets for downstream vision tasks. However, they often inherit biases from large-scale training data, which can result in unbalanced attribute distributions in the generated images. While prior efforts have attempted to mitigate these biases, most focus on single-object images and struggle to control attributes across object instances in multi-instance generations. To address this limitation, we propose an instance-wise control of the attribute distribution by fine-tuning diffusion models with guidance from a pre-trained object detector and an attribute classifier. Our approach aligns the attribute distribution over object instances in generated images with a user-defined distribution, which enables precise control over attribute proportions at the instance level. Experiments across various objects and attributes demonstrate that our method generates high-quality, multi-instance images that match the specified distribution, supporting the scalable creation of distribution-aware synthetic datasets for in-the-wild vision tasks.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="Distribution control" srcset="
/en/topics/gen-ai/chan-pr_hu_77cb27038f1bb73b.webp 400w,
/en/topics/gen-ai/chan-pr_hu_93e35abb8086b2fe.webp 760w,
/en/topics/gen-ai/chan-pr_hu_b48b71710b548e78.webp 1200w"
src="https://im.sanken.osaka-u.ac.jp/en/topics/gen-ai/chan-pr_hu_77cb27038f1bb73b.webp"
width="760"
height="433"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p></description></item></channel></rss>