Google’s Parti generator relies on 20 billion parameters to turn text into lifelike images.

Google on Thursday presented an update on Parti, its text-to-image generator project, which can create lifelike images using a model scaled to 20 billion parameters.

However, citing the risk of bias in AI training datasets, the company has not publicly released the model, code, or data.


(Image source: Google Parti)

It is reported that Parti’s full name is “Pathways Autoregressive Text-to-Image”: it generates an image token by token, much as a language model predicts the next word.

As the number of model parameters grows, the generated images become more realistic.

In this case, Parti was scaled to roughly 20 billion parameters before generating its final images.
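The autoregressive idea can be illustrated with a toy sketch. This is not Parti’s actual architecture; the `toy_next_token` function below is a hypothetical stand-in for a trained transformer, and the token counts are made up for illustration.

```python
import random

def toy_next_token(prefix, vocab_size=8):
    # Hypothetical stand-in for a trained model: a deterministic
    # function of the prefix so far. NOT Parti's real predictor.
    random.seed(sum(prefix) + len(prefix))
    return random.randrange(vocab_size)

def generate_image_tokens(text_tokens, num_image_tokens=16):
    """Autoregressive decoding: each image token is predicted from the
    text prompt plus all previously generated image tokens."""
    context = list(text_tokens)
    image_tokens = []
    for _ in range(num_image_tokens):
        nxt = toy_next_token(context)
        image_tokens.append(nxt)
        context.append(nxt)   # the new token conditions the next step
    return image_tokens

tokens = generate_image_tokens([3, 1, 4], num_image_tokens=16)
print(len(tokens))
```

In the real system, a separate image tokenizer/detokenizer maps such discrete tokens to and from pixels; scaling the predictor to billions of parameters is what the article credits for the gain in realism.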

By contrast, Imagen is a Google-designed text-to-image generator based on diffusion models.

During training, the model adds “noise” to an image until it resembles a blurred field of static, and then learns to reverse the process and recover the image.

As the model improves, the system can gradually turn a field of random dots into the lifelike image we end up seeing.

Finally, beyond Parti and Imagen, other text-to-image models include DALL-E, VQ-GAN+CLIP, and Latent Diffusion Models.