This solution allows data contained within BigQuery to be queried and stored in Firestore, where it can then be accessed on the fly by server-side GTM. This allows tags to be augmented with additional information before they are sent to their endpoints. For example:
A purchase event on the client-side GTM container triggers an HTTP request to the server-side container
↓
HTTP request contains event name ‘Purchase’ and item id ‘14589’
↓
Item id is used to look up item information using a Firestore lookup variable
↓
All information about that specific item is returned to the server container, where it can be added to the event tag before it is sent to GA4
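To make that concrete, here is a simplified sketch of the enrichment step. The field names and values are illustrative only (they are not taken from a real container), but they show the shape of the data before and after the Firestore lookup:

// Incoming event as seen by the server-side GTM container (illustrative)
const incomingEvent = {
  event_name: 'purchase',
  items: [{ item_id: '14589' }]
};

// Hypothetical document returned by the Firestore lookup for key '14589'
const firestoreItem = {
  item_id: '14589',
  item_name: 'Waterproof jacket',
  item_category: 'Outerwear',
  price: 89.99
};

// The GA4 tag can then be populated with the merged item before the event is sent on
const enrichedItem = { ...incomingEvent.items[0], ...firestoreItem };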

The Google Cloud services used in this solution are BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, Workflows and Firestore.
Something you will need to have in place before proceeding:
You also need to enable these services:
Create a new service account that is able to do the following:
First, we need to create a Pub/Sub-triggered Cloud Function that will write the necessary data to Firestore. There is no need to alter the code below, as all of the information required will be contained within the Pub/Sub message that triggers the function.
The settings to apply to the function are:
Then attach the service account created earlier.
The package.json for the function:

{
  "name": "gcf-cloudstorage-to-firestore",
  "version": "1.0.0",
  "dependencies": {
    "@google-cloud/storage": "^5.20.3",
    "firebase-admin": "^10.2.0",
    "split": "^1.0.1"
  }
}

And the function code itself:

'use strict';
const admin = require('firebase-admin');
const {Storage} = require('@google-cloud/storage');
const split = require('split');
/**
* Triggered from a Pub/Sub message.
*
* @param {!Object} event Event payload.
* @param {!Object} context Metadata for the event.
*/
exports.loadCloudStorageToFirestore = async (event, context) => {
  // All configuration is passed in via the Pub/Sub message that triggers the function
  const pubSubMessage = event.data ? Buffer.from(event.data, 'base64').toString() : '{}';
  const config = JSON.parse(pubSubMessage);
  console.log(config);

  if (typeof config.projectId !== 'undefined') {
    const projectId = config.projectId;
    const bucketName = config.bucketName;
    const bucketPath = config.bucketPath;
    const firestoreCollection = config.firestoreCollection;
    const firestoreKey = config.firestoreKey;

    console.log(`Initiated new import to Firebase: gs://${bucketName}/${bucketPath}`);

    // Init Firebase
    if (admin.apps.length === 0) {
      admin.initializeApp({ projectId: projectId });
    }

    // Init Storage
    const storage = new Storage();
    const bucket = storage.bucket(bucketName);
    const file = bucket.file(bucketPath);

    let keysWritten = 0;

    try {
      // TO-DO: Remove old records
      // Read the newline-delimited JSON file line by line and write each record to Firestore
      await new Promise((resolve, reject) => {
        file.createReadStream()
          .on('error', error => reject(error))
          .pipe(split())
          .on('data', async record => {
            if (!record || record === '') return;
            keysWritten++;
            const data = JSON.parse(record);
            // Firestore document IDs cannot contain slashes, so strip them (and dots) from the key
            const key = data[firestoreKey].replace(/[/]|\./g, '');
            try {
              await admin.firestore().collection(firestoreCollection).doc(key).set(data);
            } catch (e) {
              console.log(`Error setting document: ${e}`);
            }
          })
          .on('end', () => {
            console.log(`Successfully written ${keysWritten} keys to Firestore.`);
            resolve();
          })
          .on('error', error => reject(error));
      });
    } catch (e) {
      console.log(`Error importing ${bucketPath} to Firestore: ${e}`);
    }
  }
};

Once finished, deploy the function and check that it deploys without throwing any errors.
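If you want to test the function before wiring up the workflow, you can publish a message to the topic yourself. Below is a minimal sketch using the @google-cloud/pubsub Node.js client; the placeholder values are the same ones used throughout this post, and it assumes the topic already exists, the function is subscribed to it, and a newline-delimited JSON file is already sitting at the bucket path:

const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub({ projectId: '<your-project-id>' });

// This is the config object the function expects to find in the Pub/Sub message
const message = {
  projectId: '<your-firestore-project-id>',
  bucketName: '<your-export-bucket>',
  bucketPath: 'firestore-export/firestore-export.json',
  firestoreCollection: '<your-firestore-collection>',
  firestoreKey: '<your-key-to-use-as-firestore-document-id>'
};

async function publishTestMessage() {
  // Pub/Sub delivers the data to the function base64-encoded in event.data
  const messageId = await pubsub
    .topic('<your-pubsub-topic-name>')
    .publishMessage({ data: Buffer.from(JSON.stringify(message)) });
  console.log(`Published test message ${messageId}`);
}

publishTestMessage().catch(console.error);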
Create a new Google Cloud Workflow and apply the service account created earlier. You will need to copy in the YAML code below, which will create a visual representation of the workflow:
- init:
    assign:
      - project_id: "<your-project-id>"
      - bq_dataset_export: "<your-bq-dataset-for-export-table>"
      - bq_table_export: "<your-bq-tablename-for-export-table>"
      - bq_query: >
          select
            user_id,
            device_first,
            channel_grouping_first
          from
            `bigquery.table`
      - gcs_bucket: "<your-export-bucket>"
      - gcs_filepath: "firestore-export/firestore-export.json"
      - pubsub_topic: "<your-pubsub-topic-name>"
      - pubsub_message: {
          "projectId": "<your-firestore-project-id>",
          "bucketName": "<your-export-bucket>",
          "bucketPath": "firestore-export/firestore-export.json",
          "firestoreCollection": "<your-firestore-collection>",
          "firestoreKey": "<your-key-to-use-as-firestore-document-id>"
        }
- bigquery-create-export-table:
    call: googleapis.bigquery.v2.jobs.insert
    args:
      projectId: ${project_id}
      body:
        configuration:
          query:
            query: ${bq_query}
            destinationTable:
              projectId: ${project_id}
              datasetId: ${bq_dataset_export}
              tableId: ${bq_table_export}
            create_disposition: "CREATE_IF_NEEDED"
            write_disposition: "WRITE_TRUNCATE"
            allowLargeResults: true
            useLegacySql: false
- bigquery-table-to-gcs:
    call: googleapis.bigquery.v2.jobs.insert
    args:
      projectId: ${project_id}
      body:
        configuration:
          extract:
            compression: NONE
            destinationFormat: "NEWLINE_DELIMITED_JSON"
            destinationUris: ['${"gs://" + gcs_bucket + "/" + gcs_filepath}']
            sourceTable:
              projectId: ${project_id}
              datasetId: ${bq_dataset_export}
              tableId: ${bq_table_export}
- publish_message_to_pubsub:
    call: googleapis.pubsub.v1.projects.topics.publish
    args:
      topic: ${"projects/" + project_id + "/topics/" + pubsub_topic}
      body:
        messages:
          - data: ${base64.encode(json.encode(pubsub_message))}

You will need to add in some values here, which will be used to access BigQuery, run a query, store the result in Cloud Storage and then trigger a Pub/Sub message containing the necessary Firestore information. It is important to note that the dataset and table variables you define are where the query results will be stored. DO NOT use the same dataset and table name as the table being queried, or all of its data will be replaced with the query result.
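As an illustration of how a single exported row becomes a Firestore document, here is a hedged sketch (the values are made up; the field names come from the example query above):

// One line of the newline-delimited JSON export written to Cloud Storage (hypothetical values)
const exportedRow = '{"user_id":"abc123","device_first":"mobile","channel_grouping_first":"Organic Search"}';

// With firestoreKey set to "user_id", the Cloud Function strips slashes and dots from the key
// and writes the row to <your-firestore-collection>/abc123
const data = JSON.parse(exportedRow);
const documentId = data['user_id'].replace(/[/]|\./g, '');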
Here are some descriptions of the information needed above:
Now that you have the function, bucket, Pub/Sub topic and workflow in place, run the workflow to check that everything works as it should. You may not see the data appear in Firestore immediately; it can take 5 or 10 minutes, but once it does, you should see something like this:

The collection name is on the far left, the list of added documents is in the center, and the key-value pairs contained within each document are on the right. If you do not see the data after 5 or 10 minutes, here are some things to check.
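For example, one quick sanity check you can run locally is to confirm that the expected document exists, using the same firebase-admin client as the Cloud Function (the collection name and key below are placeholders, and it assumes you are authenticated locally, for example via application default credentials):

const admin = require('firebase-admin');

admin.initializeApp({ projectId: '<your-firestore-project-id>' });

// Fetch a single document and print its data, or a message if it is missing
async function checkDocument(collection, key) {
  const doc = await admin.firestore().collection(collection).doc(key).get();
  console.log(doc.exists ? doc.data() : `No document found for key ${key}`);
}

checkDocument('<your-firestore-collection>', '<a-key-you-expect-to-exist>');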
Now that all of the data you wish to access is within Firestore, we can create a ‘Firestore Lookup’ variable from within ssGTM in order to retrieve it for use in our tags.
To do this, head to your ssGTM instance and navigate to the variables. Select New under ‘User-Defined Variables’ and then select ‘Firestore Lookup’. This variable allows you to access information within Firestore based on a lookup value.

Above you can see an example configuration of the Firestore Lookup variable. Here is a description of each field:
One other thing of note: by default, ssGTM will look in the Firestore database of the project the container is provisioned on, but you can specify another GCP project to look up the value in. This is very handy, as many people will have one project for their server and a second for their data warehouse. In order to look up a value in another project, the ssGTM container project needs to have the correct permissions in the project in which the data lives. To do this, simply set up a service account with the correct permissions.
Now that everything is set up, we can preview our ssGTM container to check that the expected value is appearing in our new Firestore Lookup variable.

As you can see, in our test the expected value is returned and can now be sent on with any tag we wish.
This has the potential to be a very powerful way of enriching events with new, useful information before they are sent from the server. Obvious examples are things like product data and user information, but I am sure more new and interesting use cases will reveal themselves.
Most of the information in this post is adapted from two blogs, and code is also borrowed from a Stacktonic Git repository that has an accompanying blog post (see below).
Cloud Function code and workflow code from Krisjan Oldekamp at Stacktonic.
Simo Ahava's blog on enriching server-side data with Cloud Firestore.
Krisjan Oldekamp's blog on exporting BigQuery data to Google Firestore and GTM server.