This is a quick guide on correctly reading Iceberg tables from BigQuery. Currently, there are two types of Iceberg tables in BigQuery, based on the writer:
BigQuery Iceberg Table
This is a table written using the BigQuery engine.

Iceberg Tables Written Using the BigQuery Metastore
Currently, only Spark is supported; I assume other engines will be added at some point. The implementation is entirely open source, but it currently supports only Java (a REST API would have been a nice addition).
How OneLake Iceberg Shortcuts Work
OneLake reads both the data and the metadata of an Iceberg table from its storage location and dynamically generates a Delta Lake log. This is a quick and cost-effective operation, as it only involves generating JSON files. See an example here.
The Delta log is added to OneLake, while the source data remains read-only. Whenever you change the Iceberg table, the new Iceberg metadata is translated into an updated Delta log. The process is straightforward.
BigQuery Iceberg Doesn’t Publish Metadata Automatically
BigQuery uses an internal system to manage transactions. When querying data from the BigQuery SQL endpoint, the results are always consistent. However, reading directly from storage may return an outdated state of the table.
For BigQuery Iceberg tables, you need to run the following command manually to update the metadata in storage:
EXPORT TABLE METADATA FROM dataset.iceberg_table;
You can run it on a schedule or make it the last step in an ETL pipeline, as in the sketch below.
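For illustration, here is a minimal sketch of triggering the export from Python with the google-cloud-bigquery client; the project, dataset, and table names are placeholders:

from google.cloud import bigquery

# Placeholder project, dataset, and table names.
client = bigquery.Client(project="my-project")

# EXPORT TABLE METADATA writes fresh Iceberg metadata to the table's storage location,
# so readers that go directly to storage (such as a OneLake shortcut) see the latest state.
job = client.query("EXPORT TABLE METADATA FROM my_dataset.iceberg_table")
job.result()  # wait for the export to finish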
Iceberg Tables Using the BigQuery Metastore (Written by Spark)
If the Iceberg table is written through the BigQuery Metastore (e.g., by Spark), no additional steps are required; the metadata in storage is kept up to date automatically.
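For reference, here is a sketch of what the Spark side can look like with the open-source BigQuery Metastore Iceberg catalog. The catalog name, project, location, and warehouse path are placeholders, and the catalog class and option names are assumptions to verify against the current Google documentation:

from pyspark.sql import SparkSession

# Assumed catalog class and option names; "bq" is an arbitrary catalog name and the
# project, location, and warehouse values are placeholders.
spark = (
    SparkSession.builder
    .appName("iceberg-bq-metastore")
    .config("spark.sql.catalog.bq", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.bq.catalog-impl",
            "org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog")
    .config("spark.sql.catalog.bq.gcp_project", "my-project")
    .config("spark.sql.catalog.bq.gcp_location", "us")
    .config("spark.sql.catalog.bq.warehouse", "gs://my-bucket/warehouse")
    .getOrCreate()
)

# Each write commits through the BigQuery Metastore, so the Iceberg metadata in storage
# is always current and no EXPORT TABLE METADATA step is needed.
spark.sql("INSERT INTO bq.my_dataset.iceberg_table VALUES (1, 'hello')")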
The interesting part about Iceberg’s translation to a Delta table in OneLake is that it is completely transparent to Fabric workloads. For example, Power BI simply recognizes it as a regular Delta table. 😊
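To illustrate, here is a sketch of reading the shortcut from a Fabric notebook attached to the lakehouse that contains it; iceberg_table is a placeholder shortcut name:

from pyspark.sql import SparkSession

# In a Fabric notebook a SparkSession already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

# The shortcut appears under the lakehouse Tables section and reads like any other Delta table.
df = spark.read.table("iceberg_table")
df.show()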