[Azure Data Factory] Data Flow Sink is extremely slow

Hanna Melashkina 0 Reputation points
2026-02-02T16:10:32.27+00:00

I'm copying about 800 rows from SAP to Microsoft Fabric. I'm running a pipeline with a data flow in Azure Data Factory (ADF), using a self-hosted integration runtime. The data flow is not complex: it just copies data and adds one additional column. There are no complex transformations.

However, the Sink activity takes around 10 minutes!

Sink configuration:

  • Sink type: dataset
  • Dataset: Fabric warehouse
  • Allow schema drift: true
  • Allow insert: true
  • Batch size: 100
  • Enable staging: false (I cannot turn it on; otherwise, I get an error in my pipeline: "Fabric DW supports staging only via copy command - EXE-0001, Dataflow cannot be analyzed as a graph.")

I also tried using a Copy activity as suggested here, but it keeps failing with a BULK insert error:

Failure happened on 'Sink' side. ErrorCode=SqlOperationFailed, 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException, Message=A database operation failed. Please search error to get more details., Source=Microsoft.DataTransfer.Connectors.MSSQL, ''Type=Microsoft.Data.SqlClient.SqlException, Message=Incorrect syntax near 'INSERT BULK'., Source=Framework Microsoft SqlClient Data Provider,'

Some other observations:

  • Sink processing time: 10m 33s
  • Stage time: 7m 36s
  • Table operations SQL duration: 2m 30s

I also checked the warehouse's long_running_queries view and saw that INSERT was called 840 times, with each insert taking around 450 ms. That's roughly 840 * 450 ms ≈ 378,000 ms (over 6 minutes)!

My question: Why is ADF inserting rows one by one and ignoring the batch size of 100? Since I cannot enable staging without errors and the bulk insert fails, what is the correct configuration to load data from SAP to Fabric efficiently?


2 answers

  1. Q&A Assist
    2026-02-02T16:10:51.77+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    The slow performance you are experiencing with the Data Flow Sink in Azure Data Factory (ADF) could be attributed to several factors. Since you mentioned that the sink is inserting items one by one despite the batch size configuration, here are some considerations and suggestions to improve the performance:

    1. Batch Size Configuration: Ensure that the batch size is set correctly in the sink settings. Although you mentioned that the batch size is set to 100, if the sink is not configured to handle batch inserts effectively, it may still process each insert individually.
    2. Staging: Since you cannot enable staging due to the error you encountered, it limits the ability to optimize the data loading process. Staging can significantly improve performance by allowing bulk operations. If possible, investigate the error further to see if there is a way to resolve it or consider using a different approach that allows staging.
    3. Copy Activity: You mentioned trying the Copy activity but encountering a BULK insert error. This might indicate that the Copy activity is not configured correctly. Ensure that the allowCopyCommand property is set to true, which allows the Copy activity to use the COPY statement when loading data into the Fabric Warehouse (see the sketch after this list). This can help achieve better performance.
    4. Error Handling: Review the error handling settings in your Data Flow. If the flow is configured to stop on errors, it may be causing delays. Consider enabling options that allow the flow to continue processing even if some rows encounter errors.
    5. Performance Monitoring: Utilize the monitoring tools available in Azure Data Factory to identify bottlenecks. Check for any throttling issues or resource constraints that may be affecting the performance of your integration runtime.
    6. Parallelism: If your source data allows, consider increasing the degree of parallelism in your data flow. This can help in processing multiple records simultaneously, reducing the overall time taken for the sink operations.

    By addressing these areas, you may be able to improve the performance of your data loading process from SAP to Azure Fabric.
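
    For illustration, a Copy activity sink that opts into the COPY statement might look like the sketch below. The WarehouseSink type and allowCopyCommand property follow the Microsoft Fabric Warehouse connector documentation; the source assumes Parquet files on ADLS Gen2, and the activity and dataset names are placeholders for your own objects:

    ```json
    {
        "name": "CopyToFabricWarehouse",
        "type": "Copy",
        "typeProperties": {
            "source": {
                "type": "ParquetSource",
                "storeSettings": { "type": "AzureBlobFSReadSettings", "recursive": true }
            },
            "sink": {
                "type": "WarehouseSink",
                "allowCopyCommand": true
            }
        },
        "inputs": [ { "referenceName": "StagedParquetDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "FabricWarehouseTableDataset", "type": "DatasetReference" } ]
    }
    ```

    With allowCopyCommand set to true, the service loads the files with a single server-side COPY statement instead of issuing per-row INSERTs.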



  2. Manoj Kumar Boyini 5,795 Reputation points Microsoft External Staff Moderator
    2026-02-02T16:45:09.79+00:00

    Hi Hanna Melashkina,

    SAP is not visible as a source in the ADF Copy activity because the SAP connector you are using is supported only in Mapping Data Flow (the Spark runtime) and is not exposed in the Copy activity's data-movement runtime. This is a product capability difference, not a configuration issue.

    Because of this, SAP extraction must be done using Mapping Data Flow in your environment.

    When Mapping Data Flow writes directly to Fabric Warehouse, it cannot use Fabric’s required COPY INTO ingestion path. As a result, Data Flow falls back to row-by-row INSERT statements, and the batch size setting does not enforce bulk loading.

    This is why you observed:

      • Hundreds of individual INSERT statements
      • ~10 minutes of sink time for ~800 rows

    You must use a two-step pipeline:

    1. Extract from SAP (Mapping Data Flow)
       • Source: SAP (via the self-hosted integration runtime)
       • Sink: ADLS Gen2 (Parquet recommended, CSV supported)
       • Minimal transformation (for example, add the extra column here)

    2. Load into Fabric Warehouse (Copy activity)
       • Source: the ADLS Gen2 files from step 1
       • Sink: Fabric Warehouse (WarehouseSink)
       • Enable the COPY command (allowCopyCommand = true)

    This allows Fabric to use server-side COPY INTO, which performs a true bulk load and avoids row-by-row INSERTs.
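
    For reference, a minimal pipeline skeleton for this two-step pattern might look like the sketch below. The activity types (ExecuteDataFlow, Copy) are standard ADF pipeline JSON; the referenceName values are placeholders for your own data flow and datasets, and the sink uses the allowCopyCommand property described above:

    ```json
    {
        "name": "SapToFabricWarehouse",
        "properties": {
            "activities": [
                {
                    "name": "ExtractSapToAdls",
                    "type": "ExecuteDataFlow",
                    "typeProperties": {
                        "dataFlow": { "referenceName": "SapExtractDataFlow", "type": "DataFlowReference" }
                    }
                },
                {
                    "name": "LoadAdlsToWarehouse",
                    "type": "Copy",
                    "dependsOn": [
                        { "activity": "ExtractSapToAdls", "dependencyConditions": [ "Succeeded" ] }
                    ],
                    "typeProperties": {
                        "source": {
                            "type": "ParquetSource",
                            "storeSettings": { "type": "AzureBlobFSReadSettings", "recursive": true }
                        },
                        "sink": { "type": "WarehouseSink", "allowCopyCommand": true }
                    },
                    "inputs": [ { "referenceName": "StagedParquetDataset", "type": "DatasetReference" } ],
                    "outputs": [ { "referenceName": "FabricWarehouseTableDataset", "type": "DatasetReference" } ]
                }
            ]
        }
    }
    ```

    Because the Copy activity in step 2 reads bulk-friendly Parquet files instead of pulling from SAP, Fabric can ingest them with a single set-based COPY INTO rather than hundreds of individual INSERTs.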

    For your reference, please review the following Microsoft Learn documentation. These articles provide the official guidance for SAP connectors, Fabric Warehouse ingestion, and the supported COPY INTO pattern that applies to your scenario:

    https://learn.microsoft.com/en-us/azure/data-factory/industry-sap-connectors
    https://learn.microsoft.com/en-us/azure/data-factory/sap-change-data-capture-shir-preparation
    https://learn.microsoft.com/en-us/azure/data-factory/connector-microsoft-fabric-warehouse?tabs=data…
    https://learn.microsoft.com/en-us/fabric/data-warehouse/ingest-data-copy
    https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview

    Hope this helps. Please let us know if you have any questions or concerns.

