Using C++-style streams, I have been unable to read or write files larger than 4 GB. For example, when writing 5 GB in one go, only 1 GB ends up in the file (which is exactly what you would get if the byte count were truncated to 32 bits: 5 GB mod 2^32 bytes = 1 GB). The declaration of write uses streamsize for the size parameter, and according to sizeof, streamsize is 8 bytes when compiling with -m64. Looking at the assembly for a simple testcase, I verified that the size parameter is not truncated before the call to write.
What makes this interesting is that I can read and write large files if I do it C-style (fopen, fwrite, fclose), or if I use the C++ streams but split the I/O into 1 GB blocks by hand (a sketch of that workaround is at the end of this post). I tried the instructions for LFS (large file support), but they made no difference, and they shouldn't be needed anyway since this is C++ and I am building a 64-bit application. I have observed this with Solaris Studio 12.3 and 12.4.
Is this a bug, or am I missing a compiler flag?
A simple testcase to expose this:
#include <iostream>
#include <fstream>
#include <algorithm>  // std::fill
#include <cstdint>    // int64_t
#include <cstdio>     // fopen, fwrite, fclose

using namespace std;

int main() {
    // 5 GB target: 4 GB + 1 GB, deliberately larger than 2^32 bytes
    streamsize goal_size = ((int64_t) 1 << 32) + ((int64_t) 1 << 30);
    cout << goal_size << endl;

    char* data = new char[goal_size];
    fill(data, data + goal_size, 'c');

    // C-style write: produces the full 5 GB file
    FILE* f = fopen("classic.data", "w");
    fwrite(data, 1, goal_size, f);
    fclose(f);

    // C++ stream write: only 1 GB ends up in the file
    fstream file("stream.data", std::ios::out | std::ios::binary);
    file.write(data, goal_size);
    file.close();

    delete[] data;
    return 0;
}
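
For completeness, this is roughly what the chunked workaround looks like. It is only a sketch: the helper name chunked_write is mine, and the 1 GB chunk size is arbitrary (anything below 4 GB behaves the same for me).

#include <fstream>
#include <algorithm>  // std::min

// Sketch of the workaround: push a large buffer through an ostream
// in chunks smaller than 4 GB instead of one huge write() call.
void chunked_write(std::ostream& out, const char* data, std::streamsize total) {
    const std::streamsize chunk = (std::streamsize) 1 << 30;  // 1 GB per write
    std::streamsize written = 0;
    while (written < total) {
        std::streamsize n = std::min(chunk, total - written);
        out.write(data + written, n);
        written += n;
    }
}

Replacing the single file.write(data, goal_size) in the testcase with chunked_write(file, data, goal_size) produces the full 5 GB stream.data for me.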