> You are using fflate, which converts ASCII to binary as far as I know.
This understanding isn’t quite correct — fflate does technically output binary data, but in that case it’s still just wrapping larger plaintext source data. Note that USDZ uses zero-compression zipping, so the size of the zipped data is no smaller than the original. Data that is binary to begin with will generally be much smaller.
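To see why a zero-compression ("stored") zip can never shrink the data, here's a rough back-of-the-envelope sketch in plain Node.js. The header sizes come from the ZIP APPNOTE spec (and ignore optional extra fields, zip64, etc.); the filename and payload size are made up for illustration:

```javascript
// A "stored" (level 0) zip entry keeps the payload byte-for-byte, plus
// fixed per-entry container overhead defined by the ZIP spec:
// a local file header, a central directory entry, and an end record.
const name = 'model.usdc';        // hypothetical entry name
const payloadBytes = 1_000_000;   // hypothetical payload size

const localHeader = 30 + name.length; // local file header + filename
const centralDir  = 46 + name.length; // central directory entry + filename
const endRecord   = 22;               // end-of-central-directory record

const zipBytes = payloadBytes + localHeader + centralDir + endRecord;
console.log(zipBytes > payloadBytes); // always true: stored zips only add bytes
```

So a USDZ is, at best, a few hundred bytes larger than the sum of the files inside it; any real size difference has to come from the files themselves.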
USD has ASCII and binary variants (USDA and USDC), and a USDZ could contain either. This works quite differently from glTF vs. GLB… USDC and GLB are pretty similar, but USDA is much less efficient than glTF.¹
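To make the ASCII vs. binary gap concrete, here's a rough illustration in plain Node.js (this mimics how text formats like USDA or OBJ store numbers as decimal strings vs. how USDC/GLB store raw floats — it is not actual USD output):

```javascript
// Same vertex data, two encodings:
// binary: 4 bytes per float32 (as in USDC or a GLB buffer)
// ascii:  each number printed as decimal text (as in USDA or OBJ)
const positions = new Float32Array(3000); // 1000 vertices × xyz
for (let i = 0; i < positions.length; i++) {
  positions[i] = Math.random() * 2 - 1;
}

const binaryBytes = positions.byteLength; // 3000 × 4 = 12000 bytes
const asciiBytes = Buffer.byteLength(Array.from(positions).join(', '));

console.log(`binary: ${binaryBytes} bytes`);
console.log(`ascii:  ${asciiBytes} bytes`); // typically several times larger
```

The exact ratio depends on how many digits the writer emits, but full-precision text is usually 4–5× the binary size — a big difference, though still not 50×.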
Probably that’s all beside the point though, because (1) three.js can’t write binary (USDC) data into the USDZ anyway, and (2) it’s entirely possible that the ASCII vs. binary difference isn’t big enough to explain the 50x increase you’re seeing. I’m not sure how to test that without comparing a binary-ified USDZ file, though, sorry.
If your source GLB used Draco or Meshopt compression, or quantization, that would be another explanation, because that compression won’t carry over into USDZ at all.
¹ Best case: the .gltf references an external .bin and the binary data is exactly the same as in a .glb… Worst case: the .gltf contains Data URIs for the binary data, which add ~33% to the data size. Either way, the vertex data in glTF is still never plaintext ASCII numbers like it would be in OBJ or USDA.
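For reference, that Data URI overhead comes from base64 encoding, which stores every 3 bytes of binary data as 4 ASCII characters. A quick sketch in Node.js (the MIME type and payload are just placeholders):

```javascript
// Base64 maps each 3-byte group to 4 characters, so an embedded
// Data URI payload is one third larger than the raw .bin bytes,
// before even counting the "data:...;base64," prefix.
const raw = Buffer.alloc(3 * 1000, 0xab); // 3000 bytes of binary data
const base64 = raw.toString('base64');
const dataUri = 'data:application/octet-stream;base64,' + base64;

console.log(base64.length);            // 4000 characters for 3000 bytes
console.log(base64.length / raw.length); // ≈ 1.33
```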