
fstream can't read or write files >4GB even with -m64?

2813865, Dec 12 2014 (edited Dec 16 2014)

Using C++-style streams, I have been unable to read or write files larger than 4GB. For example, when writing 5GB in one call, only 1GB ends up in the file. The declaration of write uses the type streamsize for the size parameter, and according to sizeof, streamsize is 8 bytes when compiling with -m64. Looking at the assembly for a simple testcase, I verified that the size parameter is not truncated before the call to write.

What makes this interesting is that I can read and write large files if I use C-style I/O (fopen, fwrite, fclose), or if I use the C++ functions but do the transfer manually in 1GB blocks (see the sketch after the testcase below). I tried the large-file support (LFS) instructions, but they didn't help, and they shouldn't be needed anyway, since this is C++ and I am building a 64-bit application. I have observed this with Solaris Studio 12.3 and 12.4.

Is this a bug or am I missing a compiler flag?

A simple testcase that exposes the problem:

#include <iostream>
#include <fstream>
#include <algorithm>   // std::fill
#include <cstdint>     // int64_t
#include <stdio.h>

using namespace std;

int main() {
  // 5GB (2^32 + 2^30 bytes): large enough to cross the 4GB boundary
  streamsize goal_size = ((int64_t) 1 << 32) + ((int64_t) 1 << 30);
  cout << goal_size << endl;

  char* data = new char[goal_size];
  fill(data, data + goal_size, 'c');

  // C-style I/O: the full 5GB lands on disk
  FILE* f = fopen("classic.data", "w");
  fwrite(data, 1, goal_size, f);
  fclose(f);

  // C++ streams: only 1GB ends up in the file
  fstream file("stream.data", std::ios::out | std::ios::binary);
  file.write(data, goal_size);
  file.close();

  delete[] data;
  return 0;
}
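For reference, here is a minimal sketch of the 1GB-block workaround mentioned above. The helper name write_in_blocks and the loop structure are my own illustrative choices; the only point is that each individual write call stays below the 4GB boundary:

#include <ostream>
#include <algorithm>   // std::min

// Write through ostream::write in chunks of at most 1GB each,
// so no single call passes a size anywhere near 2^32.
void write_in_blocks(std::ostream& out, const char* data, std::streamsize total) {
  const std::streamsize block = (std::streamsize)1 << 30;  // 1GB per call
  for (std::streamsize written = 0; written < total && out; ) {
    std::streamsize n = std::min(block, total - written);
    out.write(data + written, n);
    written += n;
  }
}

Calling write_in_blocks(file, data, goal_size) in place of the single file.write call in the testcase produces the full 5GB file.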

This post has been answered by Fedor-Oracle on Dec 15 2014