File Size and Record Limit

Postby Jack » Mon Oct 27, 2014 7:07 am

Hi,
I have a DBF file with 5 fields; it is an audit file.

Its size is currently 1 284 545 259 bytes and the number of records is 5 400 000.

This file is used to audit modifications, and it is only open for about one second each time a record is added.
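
The pattern is roughly this (a minimal sketch; the field names and the "audit" file name here are just placeholders):

Code:
PROCEDURE AddAuditRecord( cUser, cAction )
   USE audit SHARED NEW            // open only for the append
   APPEND BLANK                    // implicitly locks the new record
   IF ! NetErr()
      REPLACE AUDIT->USERID WITH cUser, ;
              AUDIT->ACTION WITH cAction, ;
              AUDIT->STAMP  WITH DToS( Date() ) + " " + Time()
   ENDIF
   USE                             // close again right away
   RETURN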

Is there a limit on the number of records and on the size of a file?

Thanks!
Jack
 
Posts: 280
Joined: Wed Jul 11, 2007 11:06 am

Re: File Size and Record Limit

Postby MarcoBoschi » Mon Oct 27, 2014 8:59 am

I don't know the answer.
For this purpose I create a different file every day:
man_20141027 (today)
man_20141026 (yesterday)
and so on, roughly as in the sketch below.
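
Something like this (a minimal sketch; the field structure and the .dbf extension are just my example):

Code:
// open (and create, if missing) today's audit file, e.g. man_20141027.dbf
PROCEDURE OpenDailyAudit()
   LOCAL cFile := "man_" + DToS( Date() ) + ".dbf"
   IF ! File( cFile )
      dbCreate( cFile, { { "USERID", "C", 10, 0 }, ;
                         { "ACTION", "C", 30, 0 }, ;
                         { "STAMP",  "C", 17, 0 } } )
   ENDIF
   USE ( cFile ) SHARED NEW
   RETURN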
MarcoBoschi
 
Posts: 1023
Joined: Thu Nov 17, 2005 11:08 am
Location: Padova - Italy


Re: File Size and Record Limit

Postby hua » Tue Oct 28, 2014 3:09 am

Jack,
If you use Harbour, see DOC\xhb-diff.txt. Below is an excerpt from it:

In both compilers the maximal file size for tables, memos and indexes is
limited only by the OS and the file format structures. Neither Harbour nor
xHarbour introduces its own limits here.
The maximal file size for DBFs is limited by the number of records,
2^32-1 = 4294967295, and the maximal record size, 2^16-1 = 65535 bytes,
which gives nearly 2^48 = 256TB as the maximal .dbf file size.
The maximal memo file size depends on the memo type used (DBT, FPT
or SMT) and the size of the memo block. It is limited by the maximal
number of memo blocks, 2^32, times the size of a memo block, so it is
2^32*<size_of_memo_block>. The default memo block size is 512 bytes for
DBT, 64 bytes for FPT and 32 bytes for SMT, so for the standard memo
block sizes the maxima are: DBT->2TB, FPT->256GB, SMT->128GB. The maximal
memo block size in Harbour is 2^32 and the minimal is 1 byte; it can be
any value between 1 and 65536, and above that any multiple of 64KB. This
last limitation is a workaround for some memo drivers wrongly implemented
in other languages which set only 16 bits of the 32-bit field in the memo
header. Most other languages limit the memo block size to 2^15 and require
the block size to be a power of 2; some of them also impose minimal block
size limits. A programmer who plans to share data with programs compiled
in such languages should check their documentation so as not to create
memo files which they cannot access.

The maximal NTX file size for standard NTX files is 4GB; it is limited
by internal NTX structures. Enabling 64-bit locking in [x]Harbour slightly
changes the NTX format used and increases the maximum NTX file size to 4TB.
The NTX format in [x]Harbour also has many other extensions, like support
for multitag indexes or using the record number as a hidden part of the
index key, and many others which are unique to [x]Harbour. In practice all
of the CDX extensions are supported by NTX in [x]Harbour.
The NSX format in [x]Harbour is also limited to 4GB by default, but as
with NTX, enabling 64-bit locking extends it to 4TB. It also supports the
set of features common to NTX and CDX.

The CDX format is limited to 4GB, and so far [x]Harbour does not support
an extended mode which could increase the size up to 2TB with the standard
page length; it could be bigger in all formats if support for bigger index
pages were introduced. Of course, all such extended formats are not binary
compatible with the original ones and so far can be used only by [x]Harbour
RDDs, though in ADS the .adi format is such an extended CDX format, so maybe
in the future it will be possible to use .adi indexes in our CDX RDD.

Of course, all of the above sizes can be reduced by operating system (OS)
or file system (FS) limitations, so it is necessary to check what is
supported by the environment where [x]Harbour applications are executed.
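
To put numbers on it, the arithmetic from the excerpt works out like this (a quick sketch that just multiplies the quoted limits):

Code:
// quick sketch of the size arithmetic from the excerpt above
PROCEDURE Main()
   LOCAL nMaxRec  := 2 ^ 32 - 1             // max number of records
   LOCAL nRecSize := 2 ^ 16 - 1             // max record size in bytes
   ? "Max .dbf size:", nMaxRec * nRecSize / 2 ^ 40, "TB"   // ~256 TB
   ? "Max DBT memo :", 2 ^ 32 * 512 / 2 ^ 40, "TB"         // 2 TB
   ? "Max FPT memo :", 2 ^ 32 *  64 / 2 ^ 30, "GB"         // 256 GB
   ? "Max SMT memo :", 2 ^ 32 *  32 / 2 ^ 30, "GB"         // 128 GB
   // Jack's file: 5 400 000 records is far below the 2^32-1 limit
   RETURN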


A summarized version can also be read here - https://vivaclipper.wordpress.com/2013/ ... ze-limits/
hua
 
Posts: 1050
Joined: Fri Oct 28, 2005 2:27 am

Re: File Size and Record Limit

Postby Jack » Tue Oct 28, 2014 6:54 am

Thanks for this info.
Jack
 
Posts: 280
Joined: Wed Jul 11, 2007 11:06 am

Re: File Size and Record Limit

Postby bpd2000 » Tue Oct 28, 2014 12:11 pm

From the Harbour ChangeLog:

2014-10-17 14:55 UTC+0200 Przemyslaw Czerpak (druzus/at/poczta.onet.pl)
* include/hbrddcdx.h
* src/rdd/dbfcdx/dbfcdx1.c
+ added support for large index files over 4GB in length.
These are slightly modified CDX indexes which store index page numbers
instead of index page offsets inside the index file. This trick increases
the maximum index file size from 2^32 (4GB) to 2^41 (2TB). This index
format is enabled automatically when DB_DBFLOCK_HB64 is used. This is
the same behavior as in DBFNTX and DBFNSX, for which I added support
for large indexes (up to 4TB) a few years ago.
Warning: new CDX indexes are not backward compatible and cannot be
read by other systems or older [x]Harbour versions.
If you try to open the new indexes using older [x]Harbour RDDs
then the RTE "DBFCDX/1012 Corruption detected" is generated.
When the current Harbour DBFCDX/SIXCDX RDD opens an index file
it automatically recognizes the type of index file, so it
will work correctly with both versions without any problem.
In short: people using DB_DBFLOCK_HB64 should remember
that after reindexing with new Harbour applications, old ones
cannot read the new CDX indexes.
; In the next step I plan to add support for user-defined page sizes in CDX
index files.

* doc/xhb-diff.txt
* added information about extended CDX format to section "NATIVE RDDs"

* src/rdd/dbfcdx/dbfcdx1.c
* src/rdd/dbfnsx/dbfnsx1.c
* src/rdd/dbfntx/dbfntx1.c
* disable the record readahead buffer used during indexing when only
one record can be stored inside it
! generate an RTE when data cannot be read into the record readahead
buffer during indexing

best regards
Przemek
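
If I read it correctly, the new format is selected via the locking scheme, something like this (an untested sketch; I assume RDDI_LOCKSCHEME and DB_DBFLOCK_HB64 come from dbinfo.ch):

Code:
#include "dbinfo.ch"

// Untested sketch: switch DBFCDX to the HB64 locking scheme, which
// (per the ChangeLog above) also enables the extended >4GB index format
PROCEDURE Main()
   rddSetDefault( "DBFCDX" )
   rddInfo( RDDI_LOCKSCHEME, DB_DBFLOCK_HB64, "DBFCDX" )
   // any index created from now on uses the extended CDX format,
   // readable only by [x]Harbour versions that support it
   RETURN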
bpd2000
 
Posts: 153
Joined: Tue Aug 05, 2014 9:48 am
Location: India

