Now that version 2026.1 of SketchUp is rolling out, we are getting a clearer picture of where SketchUp’s AI implementation is heading. As you likely know, the SketchUp folks had previously released SketchUp Diffusion, a rendering and visualization solution. At the last Basecamp, they then also teased their AI Assistant, a chatbot that can show help topics, write Ruby code, and perform various other tasks within an extensible dialog.
As of the latest version, there is now a single toolbar button (yes, it’s YAMDB = yet another magic dust button) that brings up the SketchUp AI interface, which now looks like this:

There are two tools listed in there: AI Assistant and AI Render (the new iteration of SketchUp Diffusion). In addition, they introduced a credit system for the various AI tools where users get some credits as part of their subscription plans, but can also buy additional ones, as needed. This floating toolbar now always shows how many credits are left.
For this post, I will focus on the AI Render tool and discuss some standard use cases. I did this a while ago for Diffusion and therefore this post serves as an update to that one.
Interface

When you click on the AI Render button, a dialog window opens up – as shown above – that features the current view of the modeling environment and three floating toolbars. You can then use this view as-is or reposition it (in SketchUp). If you have a previously generated rendering (as I have in the image) and you generate new renderings, it will use that image as a base, which lets you iterate toward the result you want.

Similar to previous Diffusion versions, you can click on the text prompt button in the toolbar and expand the settings with the slider button. In the prompt settings dialog that opens (shown in the image on the right), you can select a style preset (or stay with “Auto”), write a prompt (“Make this look like…”), and adjust some additional settings. Those let you enter a negative prompt (“But don’t do…”) for all presets and adjust model fidelity for some of them (i.e. how closely the model geometry should be respected).
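To make the available knobs concrete, here is a small sketch in plain Python of how such a render request could be represented. Note that this is purely illustrative: SketchUp does not expose AI Render settings programmatically, and every name below is my own invention.

```python
from dataclasses import dataclass

# Hypothetical illustration only: SketchUp's AI Render has no public API;
# these names are invented to mirror the dialog's settings.
@dataclass
class RenderRequest:
    prompt: str                 # "Make this look like..."
    style_preset: str = "Auto"  # a named preset, "Auto", or "No style"
    negative_prompt: str = ""   # "But don't do..."
    model_fidelity: float = 0.5 # 0..1: how closely to respect geometry

    def validate(self) -> None:
        if not 0.0 <= self.model_fidelity <= 1.0:
            raise ValueError("model_fidelity must be between 0 and 1")

req = RenderRequest(prompt="a photorealistic park setting",
                    negative_prompt="cars, signage",
                    model_fidelity=0.8)
req.validate()
```

The point of the sketch is simply that a render is fully described by four values, which is why iterating on prompts and fidelity is quick once you know what each setting does.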
Back on the main dialog’s toolbar, you also have these functions (via respective toolbars):
- Erasing artifacts – You can paint over parts of the generated image to remove something.
- Regenerating parts of an image – After selecting the areas to change, you can add a prompt describing the desired changes.
- Generating artifacts from a sketch – You can sketch over the generated image and describe what you want to generate based on your sketch.
The Plus button then lets you add the generated view to a SketchUp model as a tab, and the Save button lets you save that image externally as a file.
Aside from all of that, the right side of the main dialog features a gallery of previously created images, an overview of available credits, and a help button – in case you get stuck.
Overall, this tool is very self-explanatory and the most important thing to do is to simply try out all the functions. Fortunately, generating images is not too expensive (5 credits per image currently).
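Since each image currently costs 5 credits, budgeting is simple arithmetic. The per-image cost below comes from the post; the helper itself is just for illustration:

```python
CREDITS_PER_IMAGE = 5  # current cost per generated image

def renders_remaining(credit_balance: int) -> int:
    """How many full images a given credit balance still covers."""
    return credit_balance // CREDITS_PER_IMAGE

print(renders_remaining(120))  # 120 credits cover 24 images
```

In other words, even a modest credit balance allows for plenty of the trial-and-error experimentation that this tool rewards.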
Examples
These are a few examples that I like to use to test the AI Rendering functionality. I used most of these previously with SketchUp Diffusion. However, since the underlying AI image generation engines have been changed and refined, it is worth using these for comparison here, too.
First up is a collection of SketchUp people cutouts that admittedly have not worked well with Diffusion in the past (faces and other body features just didn’t look realistic). As you can see in this comparison (use the slider), we are now in a much more realistic space (even though I am not really sure what is happening in the bottom right).


I used “Auto” as the style preset for the image above and simply asked for a park setting. As it turns out, you can change that in the dialog to “No style” and then provide a sample image instead. You can see below what happened when I took a Seurat painting (left) and uploaded that as a sample. Pretty neat!


TIP:
Try using one of your own hand sketches as the reference image to make the generated style consistent with your own.
Next up: The large-scale masterplan that shows some rough building models, context, trees, and a bit of an aerial plan image. With a basic prompt, this already looks great. Of course, I could have modeled more detail, especially at the perimeter.


The dining hall example model below did not have any scale figures, so I asked for them to be added in the context of a university campus. While I got a reasonably well-formed crowd, they all look a bit too old. When people are involved, it may make sense to always “seed” the image generator with actual people cutouts (at least SketchUp’s basic ones).


The next model only featured my “face cutout” vase sculpture. I then asked for it to be placed in a lush garden, which worked nicely. On the one hand, this shows model fidelity (the face is still quite visible); on the other, it is a good approach for early conceptual modeling or idea generation.


As a second step, I tried the modification tools where I hand-sketched leaves on top of the planter and asked for tropical vegetation to be generated. I then also asked for the planter’s material to be changed to stone, which is of course something that I could have done in the original model, too. One thing to note is how my face slowly degrades between these steps.


My final test was intended to check reflections and lights – both are important features of any photorealistic rendering solution, of course. In my tests, these were the hardest to get right. Some runs with the Auto preset created images that were way too dark; adding a note about brightness to the prompt helped with that, though. Other attempts added odd lights on top of existing ones. The image shown below is maybe the best version that includes reflections, uplights, and downlights. When you look closely, however, the light cones don’t all line up and the reflections are a bit hit-and-miss. But diffuse lighting, reflections, and shadows worked well.


So, what does this all tell us? AI rendering, as implemented in SketchUp, is getting better and can produce some great, quick visualizations with minimal effort. There is a learning curve and a testing period when it comes to prompting and iterative adjustments. All of these underlying AI models have their unique idiosyncrasies, and users will likely need to test them thoroughly to see which workflow is best for them. It may also pay off to evaluate combining AI rendering with photorealistic rendering workflows.
Have you tried this new version? What do you think? Let me know in the comments.
Video
I am also discussing this topic in a video that I recently posted on YouTube. You can watch it here: