
azcopy.exe : panic: runtime error: invalid memory address or nil pointer dereference when moving Fabric Lakehouse data across regions #3156

@eugman

Description


Which version of the AzCopy was used?

Note: The version is visible when running AzCopy without any argument

10.30.0

Which platform are you using? (ex: Windows, Mac, Linux)

Windows

What command did you run?

Note: Please remove the SAS to avoid exposing your credentials. If you cannot remember the exact command, please retrieve it from the beginning of the log file.
azcopy copy `
  'https://onelake.dfs.fabric.microsoft.com/SourceWorkspace/Intake.lakehouse/Files' `
  'https://onelake.dfs.fabric.microsoft.com/DestinationWorkspace/Intake.lakehouse/Files' `
  --recursive=true --as-subdir=false `
  --trusted-microsoft-suffixes "fabric.microsoft.com" 

What problem was encountered?

Whenever I run the command I get the error below. I hit the same issue if I use the GUID-style URLs (e.g., https://onelake.dfs.fabric.microsoft.com/aaaaaaaa-aaaa-aaaa-aaaa-c2c6f9447ea3/aaaaaaaa-aaaa-aaaa-aaaa-32a36a3e06c1/Files).

Now, if I use --dry-run, that works without issue.
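For reference, the dry-run variant that completes without a panic is the same command with `--dry-run` appended (workspace and lakehouse names are the same placeholders as in the command above):

```powershell
azcopy copy `
  'https://onelake.dfs.fabric.microsoft.com/SourceWorkspace/Intake.lakehouse/Files' `
  'https://onelake.dfs.fabric.microsoft.com/DestinationWorkspace/Intake.lakehouse/Files' `
  --recursive=true --as-subdir=false `
  --trusted-microsoft-suffixes "fabric.microsoft.com" `
  --dry-run
```

Since --dry-run only enumerates the transfers without sending data, this suggests the panic happens during the actual copy (the StageBlockFromURL call in the stack trace below), not during scanning.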

(base) PS C:\Users\eugme\Desktop\azcopy_windows_amd64_10.30.0> C:\Users\eugme\OneDrive - Eugene Meidinger\Customers\Dallas\Fabric data migration.ps1
WARN: more than one AzCopy process is running. It is best practice to run a single process per VM.
INFO: Scanning...
INFO: Autologin not specified.
INFO: Authenticating to destination using Azure AD
INFO: Authenticating to source using Azure AD
INFO: Any empty folders will be processed, because source and destination both support folders

Job 52ecdebd-76eb-4e4c-4f59-1ef0cebe1c7c has started
Log file is located at: C:\Users\eugme\.azcopy\52ecdebd-76eb-4e4c-4f59-1ef0cebe1c7c.log

INFO: Switching to blob endpoint to write to destination account. There are some limitations when writing between blob/dfs endpoints. Please refer to https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-known-issues#blob-storage-apis
azcopy : panic: runtime error: invalid memory address or nil pointer dereference
At C:\Users\eugme\OneDrive - Eugene Meidinger\Customers\Dallas\Fabric data migration.ps1:1 char:1
+ azcopy copy `
+ ~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (panic: runtime ...ter dereference:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
[signal 0xc0000005 code=0x0 addr=0x10 pc=0x15e3ed5]
goroutine 489 [running]:
github.com/Azure/azure-storage-azcopy/v10/ste.(*destReauthPolicy).Do(0xc0002c8930, 0xc00022ca40)
	D:/a/_work/1/s/ste/destReauthPolicy.go:63 +0x1f5
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.(*Request).Next(0xc00022ca00)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:146 +0xf0
github.com/Azure/azure-storage-azcopy/v10/ste.statsPolicy.Do({}, 0xc00022ca00)
	D:/a/_work/1/s/ste/xferStatsPolicy.go:175 +0x3b
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.(*Request).Next(0xc000399558)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:146 +0xf0
github.com/Azure/azure-storage-azcopy/v10/ste.logPolicy.Do({{{0xb2d05e00, 0x0}, 0xc0006b8210, 0xc0006b8228}, 0xc0006b6a50, 0xc0006b6a80, 0xc0006b6ab0}, 0xc00022c9c0)
	D:/a/_work/1/s/ste/xferLogPolicy.go:164 +0x4c5
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.(*Request).Next(0xc00022c980)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:146 +0xf0
github.com/Azure/azure-storage-azcopy/v10/ste.(*retryNotificationPolicy).Do(0xc000399818?, 0xc00022c980)
	D:/a/_work/1/s/ste/xferRetryNotificationPolicy.go:64 +0x2a
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.(*Request).Next(0xc00022c940)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:146 +0xf0
github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime.telemetryPolicy.Do({{0xc0006c4540?, 0x1ad2a20?}}, 0xc00022c940)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/runtime/policy_telemetry.go:70 +0x1b4
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.(*Request).Next(0xc00022c900)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:146 +0xf0
github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime.includeResponsePolicy(0xc00022c900)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/runtime/policy_include_response.go:19 +0x1c
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.PolicyFunc.Do(0x0?, 0xc0003999b8?)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:216 +0x19
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.(*Request).Next(0xc00022c8c0)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/request.go:146 +0xf0
github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported.Pipeline.Do({{0xc000481c80?, 0x0?, 0x0?}}, 0x0?)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/[email protected]/internal/exported/pipeline.go:76 +0x45
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/generated.(*BlockBlobClient).StageBlockFromURL(0xc000928b58, {0x1edce10?, 0xc000c02360?}, {0xc000330150?, 0x7a41d9?}, 0x10?, {0xc0001f2340?, 0x7a41d9?}, 0x10?, 0x0, ...)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated/zz_blockblob_client.go:693 +0xd4
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob.(*Client).StageBlockFromURL(0xc000928b70, {0x1edce10, 0xc000c02360}, {0xc000330150, 0x30}, {0xc0001f2340, 0xc5}, 0x184fdc0?)
	C:/Users/cloudtest/go/pkg/mod/github.com/!azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob/client.go:232 +0x10e
github.com/Azure/azure-storage-azcopy/v10/ste.(*urlToBlockBlobCopier).GenerateCopyFunc.(*urlToBlockBlobCopier).generatePutBlockFromURL.func2()
	D:/a/_work/1/s/ste/sender-blockBlobFromURL.go:137 +0x353
github.com/Azure/azure-storage-azcopy/v10/ste.(*urlToBlockBlobCopier).GenerateCopyFunc.(*urlToBlockBlobCopier).generatePutBlockFromURL.createSendToRemoteChunkFunc.createChunkFunc.func4(0x4444443878303e67?)
	D:/a/_work/1/s/ste/sender.go:207 +0x294
github.com/Azure/azure-storage-azcopy/v10/ste.(*jobMgr).chunkProcessor(0xc0003cd408, 0x9c)
	D:/a/_work/1/s/ste/mgr-JobMgr.go:1055 +0xab
created by github.com/Azure/azure-storage-azcopy/v10/ste.(*jobMgr).poolSizer in goroutine 338
	D:/a/_work/1/s/ste/mgr-JobMgr.go:957 +0x236

How can we reproduce the problem in the simplest way?

Try to move data between two Fabric lakehouses in different regions. I don't know whether it matters, but in my case the source is a Fabric Trial and the target is a P1 Premium SKU.

Have you found a mitigation/solution?

Not yet.
EDIT: OneLake Explorer seems to be working.
EDIT 2: Fabric Pipelines addressed my issue.
