Microsoft Fabric lets you dynamically configure the number of vCores for a Python notebook session at runtime — but only when the notebook is triggered from a pipeline. If you run it interactively, the parameter is simply ignored and the default kicks in.
This is genuinely useful: you can right-size compute on a job-by-job basis without maintaining separate notebooks. A heavy backfill pipeline can request 32 cores; a lightweight daily refresh can get by with 2.
How It Works
Place a %%configure magic cell at the very top of your notebook (before any other code runs):
%%configure
{
    "vCores": {
        "parameterName": "pipelinecore",
        "defaultValue": 2
    }
}
The parameterName field ("pipelinecore" here) is the name of the parameter you’ll pass in from the pipeline’s Notebook activity. The defaultValue is the fallback used when no parameter is provided — or when you run the notebook interactively.
Fabric Python notebooks support vCore counts of 2, 4, 8, 16, 32, and 64. Memory is allocated automatically to match.
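A quick way to sanity-check that the parameter actually took effect is to log the cores the Python runtime can see at the start of the notebook. Note that os.cpu_count() reports the cores exposed to the session's container, which should track the allocated vCores, but treat this as a sanity check rather than a guarantee:

```python
import os

# Log the core count the runtime sees at session start.
# If %%configure received pipelinecore=32, this should report 32;
# if you ran interactively, expect the defaultValue instead.
visible_cores = os.cpu_count()
print(f"Session sees {visible_cores} cores")
```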
Wiring It Up in the Pipeline
In your pipeline, add a Notebook activity and open the Base parameters tab. Create a parameter named pipelinecore of type Int and set it to the core count you want for that run (e.g. 8).

When the pipeline runs, Fabric injects the value into %%configure before the session starts.
A Neat Trick: Finding the Right Compute Size
Because the vCore value is just a pipeline parameter, you can use a ForEach activity to run the same notebook across multiple core counts in sequence — great for benchmarking or profiling how your workload scales.
Set up a pipeline variable cores of type Array with a default value of [64,32,16,8,4,2]:

Then configure the ForEach activity with:
- Items: @variables('cores')
- Sequential: ✅ checked (so runs don’t overlap)
- Concurrency: 1


Inside the ForEach, add a Notebook activity and set the pipelinecore base parameter to @item(). Each iteration picks the next value from the array and passes it to the notebook, so you get a clean sequential run at 64, 32, 16, 8, 4, and 2 cores.
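To make the per-core-count comparison easy to read in the run logs, the notebook can wrap its workload in a small timing harness. This is an illustrative sketch; the benchmark helper and the sum-of-squares workload are placeholders for your real job:

```python
import os
import time

def benchmark(workload, label="run"):
    """Time a workload and report the elapsed seconds alongside
    the core count visible to this session."""
    start = time.perf_counter()
    result = workload()
    elapsed = time.perf_counter() - start
    print(f"{label}: {os.cpu_count()} cores visible, {elapsed:.2f}s")
    return result, elapsed

# Placeholder workload -- replace with the job you're profiling.
result, secs = benchmark(lambda: sum(i * i for i in range(10**6)),
                         label="sum-of-squares")
```

Run the same pipeline once and you get one timing line per iteration (64, 32, 16, 8, 4, 2 cores), which is usually enough to spot where your workload stops scaling.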
And don't forget the Notebook activity's timeout: change it from the default to a value that makes sense for your workload.

Beyond Benchmarking: Dynamic Resource Allocation
The benchmarking pattern is useful, but the more powerful idea is using this in production. Because the vCore count is just a number passed through the pipeline, nothing stops a first stage of your pipeline from deciding what that number should be.
Imagine a pipeline that starts by scanning a data lake to count the number of files or estimate the volume of data to process. Based on that output, it computes an appropriate core count and passes it to the notebook that does the actual work — 4 cores for a small daily increment, 32 for a full month’s backfill, 64 for a one-off historical load. The notebook itself doesn’t change; the compute scales to the workload automatically.
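As a sketch of that sizing step, here is an illustrative (not Fabric-specific) function the first stage might use to turn a file count into a core count. The tier list matches the values discussed above; files_per_core is a hypothetical tuning knob you'd calibrate to your own workload:

```python
# vCore tiers available for Fabric Python notebook sessions.
ALLOWED_VCORES = [2, 4, 8, 16, 32, 64]

def pick_vcores(file_count, files_per_core=250):
    """Map an estimated workload size to the smallest sufficient tier.

    files_per_core is an illustrative ratio: how many files one core
    can comfortably process within your pipeline's time budget.
    """
    needed = max(1, -(-file_count // files_per_core))  # ceiling division
    for tier in ALLOWED_VCORES:
        if tier >= needed:
            return tier
    return ALLOWED_VCORES[-1]  # cap at the largest tier

print(pick_vcores(100))    # small daily increment  → 2
print(pick_vcores(7500))   # month-long backfill    → 32
```

The first stage would emit this number (for example via the notebook's exit value), and the pipeline would pass it straight into the pipelinecore base parameter of the worker notebook.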
This kind of adaptive orchestration is normally something you’d build a lot of custom logic around. Here it’s just a parameter.
The Catch: Interactive Runs Use the Default
This only works end-to-end when triggered from a pipeline. Running the notebook manually in the Fabric UI will silently use the defaultValue — there’s no error, the parameter just won’t be overridden. Keep that in mind when testing.
Tested on Microsoft Fabric as of April 2026. Official references: "Develop, execute, and manage notebooks" and "Python experience on Notebook".
Written by Claude · Special thanks to Engineering for sharing this technique.