In the first version of our semantic search implementation in Oracle APEX, everything hinged on manually written descriptions. Someone — maybe you, maybe a user — had to describe each image in words. Then we’d send that text to OpenAI to generate embeddings, store those vectors, and use them to build a smarter kind of search.
It was powerful — but fragile. No description meant no embedding. No embedding meant no search.
And that quickly brought up a common problem: what if all we have is an image?
No title, no caption, no hint. Just raw pixels and a blank description field.
That’s when we decided to flip the script: let AI do the seeing.
In this article, I’ll walk you through how we extended our system to generate descriptions automatically using OpenAI’s vision API. When a user uploads an image, we send it off to the model, receive a meaningful description, and store it — just like we would with manually written content.
Then, as before, we generate a vector embedding from that description and plug it right back into our semantic search flow.
In other words: users can now search for ideas, scenes, and concepts — even when nobody ever described the image in the first place.
We’re no longer searching what was written.
We’re searching what’s there.
How It Works: Step by Step Inside Oracle APEX
When a user uploads an image through the form on Page 110, a PL/SQL process named PR_GENERATE_COMPLETIONS_OPENAI
is triggered automatically. That process does all the heavy lifting: it reads the image, converts it to Base64, builds the request to OpenAI, parses the response, and stores the AI-generated description in the database.
Here’s a quick look at how this is structured inside the APEX builder:

The PR_GENERATE_COMPLETIONS_OPENAI PL/SQL process, which automates image-to-text conversion using OpenAI.
Let’s break it down. This is what the backend logic does:
- It retrieves the uploaded image and its MIME type from the database.
- Converts the binary into a Base64-encoded string (cleaned of line breaks and control characters).
- Constructs a JSON payload that sends the image (as a data: URI) along with a simple question: “What’s in this image?”
- Makes a REST call to OpenAI’s chat/completions endpoint.
- Parses the response and extracts the model’s answer.
- Stores that description back in the description field of your table.
Here’s a simplified snippet of the core logic:
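The block below is a sketch of that logic, not a verbatim copy of the process shown in the builder. The vs_artworks table and description column come from the article; the image_blob and mime_type column names, the :P110_ID page item, the gpt-4o-mini model, and the OPENAI_API web credential are assumptions you would adapt to your own application.

```sql
DECLARE
  l_blob        BLOB;
  l_mime_type   VARCHAR2(100);
  l_base64      CLOB;
  l_payload     CLOB;
  l_response    CLOB;
  l_description VARCHAR2(4000);
BEGIN
  -- 1. Retrieve the uploaded image and its MIME type
  SELECT image_blob, mime_type
    INTO l_blob, l_mime_type
    FROM vs_artworks
   WHERE id = :P110_ID;

  -- 2. Convert the BLOB to Base64 and strip line breaks
  l_base64 := apex_web_service.blob2clobbase64(p_blob => l_blob);
  l_base64 := REPLACE(REPLACE(l_base64, CHR(13)), CHR(10));

  -- 3. Build the payload: the image travels as a data: URI,
  --    next to the question we want the model to answer
  l_payload := '{'
    || '"model":"gpt-4o-mini",'
    || '"messages":[{"role":"user","content":['
    || '{"type":"text","text":"What''s in this image?"},'
    || '{"type":"image_url","image_url":{"url":"data:'
    || l_mime_type || ';base64,' || l_base64 || '"}}'
    || ']}]}';

  -- 4. Call OpenAI's chat/completions endpoint
  --    (the API key lives in an APEX Web Credential)
  apex_web_service.set_request_headers(
    p_name_01  => 'Content-Type',
    p_value_01 => 'application/json');
  l_response := apex_web_service.make_rest_request(
    p_url                  => 'https://api.openai.com/v1/chat/completions',
    p_http_method          => 'POST',
    p_body                 => l_payload,
    p_credential_static_id => 'OPENAI_API');

  -- 5. Extract the model's answer from the JSON response
  l_description := JSON_VALUE(l_response, '$.choices[0].message.content');

  -- 6. Store the AI-generated description
  UPDATE vs_artworks
     SET description = l_description
   WHERE id = :P110_ID;
END;
```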
The user doesn’t have to write a single word. APEX and OpenAI take care of everything behind the scenes.
Next Step: Embedding the Description for Semantic Search
Once the AI has described the image and stored that text in the description column, the next job is turning that text into a vector. Why? Because our semantic search needs numbers, not just words, to compare meaning in a measurable way.
This is where embeddings come in. And we’re using OpenAI’s text-embedding-3-small model to get the job done.
When the user submits the form, we trigger a second APEX process: PR_GENERATE_EMBEDDINGS_OPENAI.
This process takes the description, sends it to OpenAI’s embeddings endpoint, and saves the resulting vector (a long list of floating-point numbers) into the embedding column.
Here’s how it works, step by step:
- We fetch the existing description from the database (vs_artworks).
- We build the JSON request, specifying the input text and the embedding model.
- We send the request to OpenAI’s https://api.openai.com/v1/embeddings endpoint.
- We extract the returned list of numbers from the JSON response.
- We convert it into a format that Oracle understands, using a helper function.
- We store the result in the embedding column, ready for semantic comparison.
Below is the simplified PL/SQL block behind PR_GENERATE_EMBEDDINGS_OPENAI:
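A sketch of that process follows. As before, the :P110_ID page item and OPENAI_API credential are assumptions; and where the article mentions a helper function for converting the number list into an Oracle-friendly format, this sketch uses Oracle 23ai’s built-in TO_VECTOR, which accepts a JSON-style array string directly.

```sql
DECLARE
  l_description vs_artworks.description%TYPE;
  l_payload     CLOB;
  l_response    CLOB;
  l_vector_txt  CLOB;
BEGIN
  -- 1. Fetch the existing description
  SELECT description
    INTO l_description
    FROM vs_artworks
   WHERE id = :P110_ID;

  -- 2. Build the JSON request: input text plus the embedding model
  l_payload := JSON_OBJECT(
    'model' VALUE 'text-embedding-3-small',
    'input' VALUE l_description
    RETURNING CLOB);

  -- 3. Send it to OpenAI's embeddings endpoint
  apex_web_service.set_request_headers(
    p_name_01  => 'Content-Type',
    p_value_01 => 'application/json');
  l_response := apex_web_service.make_rest_request(
    p_url                  => 'https://api.openai.com/v1/embeddings',
    p_http_method          => 'POST',
    p_body                 => l_payload,
    p_credential_static_id => 'OPENAI_API');

  -- 4. Extract the returned list of numbers as a JSON array
  l_vector_txt := JSON_QUERY(l_response,
                             '$.data[0].embedding' RETURNING CLOB);

  -- 5./6. Convert to Oracle's VECTOR type and store it
  UPDATE vs_artworks
     SET embedding = TO_VECTOR(l_vector_txt)
   WHERE id = :P110_ID;
END;
```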

Oracle APEX App Builder with the PR_GENERATE_EMBEDDINGS_OPENAI process selected. This process sends a description to OpenAI’s embeddings API and stores the returned vector in the database.
Now, with the vector stored, this record is fully ready to be queried via VECTOR_DISTANCE. We’ve completed the pipeline: from image to text, and from text to vector. And all of it, automated.
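As an illustration of the search side, a query like the following ranks artworks by similarity, assuming the user’s search text has already been embedded with the same model and bound as :query_vector (a hypothetical bind name):

```sql
-- Return the five artworks whose embeddings sit closest
-- to the query vector, by cosine distance
SELECT id, description
  FROM vs_artworks
 ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vector), COSINE)
 FETCH FIRST 5 ROWS ONLY;
```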