lz4stream and lzsa2stream for Z80 with 256bytes buffer
http://mydocuments.g2.xrea.com/html/p6/randd.html
Oh that's quite nice! Highly customizable.
Does that mean that of the dozens of packers available (for example, one list: https://github.com/uniabis/z80depacker), none of them are able to do partial depacking as I need?
Not sure if you want a buffer size of 256 bytes, which will likely give poor compression.
If you can spare the whole 16 kb as decompress buffer, you can hack in a pause every 256 bytes with some modifications to existing depackers.
To give you some idea, you could have a look at or try the following depacker for pletter 0.4a, which I modified for a circular 4 kb buffer with a 2176-byte look-back distance and a pause every 256 bytes. (Not fully tested.)
; decompressor for pletter 0.4a
; modified with circular 4kb buffer (to be aligned such that reg D bit 4 = 0)
; and a pause every 256 bytes
; start with: call pakuit (hl -> compressed data, de -> circular buffer)
; returns z-flag if completed; no data extracted if total data is
; a multiple of 256 bytes! otherwise de -> data
; nz-flag if paused, with de -> data (256 bytes)
;
; resume unpacker for next 256 bytes with: call pakuit.resume

        module uitpakker

uitpakmode = 1                  ; 2176 bytes look back

pause_on byte 0                 ; place somewhere in ram

regdump                         ; place somewhere in ram
.ra     word 0
.rbc    word 0
.rde    word 0
.rhl    word 0
.xde    word 0
.xhl    word 0
.phl    word 0

@pakuit_resume = pakuit.resume

@pakuit
        ld iy,.loop
        ld a,128
        exx
        ld de,1
        exx
.looplit
        ldi
        inc e                   ; test if e = 0 => 256 bytes written
        dec e                   ;
        jr z,.pause_on_lit      ; if so, pause unpacker
.resume_lit                     ;
        res 4,d                 ; make sure de cycles through 4kb
.loop
        add a,a
        jp nz,.hup
        ld a,(hl)
        inc hl
        rla
.hup
        jr nc,.looplit
        exx                     ; >
        ld l,e
        ld h,d
.getlen
        add a,a
        call z,.getbyteexx
        jr nc,.lenok
        add a,a
        call z,.getbyteexx
        adc hl,hl
        jp nc,.getlen
        exx                     ; <
        dec d                   ; de -> start of last few bytes if total is not
        xor a                   ; a multiple of 256 bytes, otherwise no data extracted !!!
        ld (pause_on),a
        ret
.pause_on_lit
        ld (regdump.ra),a       ; register dump
        ld (regdump.rbc),bc
        ld (regdump.rde),de
        ld (regdump.rhl),hl
        exx
        ld (regdump.xde),de
        ld (regdump.xhl),hl
        exx
        dec d                   ; de -> start of 256 bytes
        ld a,1
        ld (pause_on),a         ; id for resume on lit
        or a
        ret
.lenok
        inc hl
        exx                     ; <
        ld c,(hl)
        inc hl
        ld b,0
        if uitpakmode !=8
        bit 7,c
        jp z,.offsok
        add a,a
        call z,.getbyte
        if uitpakmode !=9
        rl b
        add a,a
        call z,.getbyte
        if uitpakmode !=0
        rl b
        add a,a
        call z,.getbyte
        if uitpakmode !=1
        rl b
        add a,a
        call z,.getbyte
        if uitpakmode !=2
        rl b
        add a,a
        call z,.getbyte
        if uitpakmode !=3
        rl b
        add a,a
        call z,.getbyte
        endif
        endif
        endif
        endif
        endif
        rl b
        add a,a
        call z,.getbyte
        jr nc,.offsok
        or a
        inc b
        res 7,c
.offsok
        endif
        inc bc
        push hl                 ; do not forget this push!!!
        exx                     ; >
        push hl
        exx                     ; <
        ld l,e
        ld h,d
        set 4,h                 ; make sure hl cycles through 4kb
        sbc hl,bc               ;
        res 4,h                 ;
        pop bc
.lnlus                          ;
        ldi                     ; replace ldir by ldi
        inc e                   ; test if e = 0 => 256 bytes written
        dec e                   ;
        jr z,.pause_on_lenok    ; if so, pause unpacker
.resume_lenok                   ;
        res 4,d                 ; make sure de cycles through 4kb
        res 4,h                 ; make sure hl cycles through 4kb
        inc c                   ;
        dec c                   ;
        jr nz,.lnlus            ;
        inc b                   ;
        dec b                   ;
        jr nz,.lnlus            ;
        pop hl
        jp iy
.pause_on_lenok
        ld (regdump.ra),a
        ld (regdump.rbc),bc
        ld (regdump.rde),de
        ld (regdump.rhl),hl
        exx
        ld (regdump.xde),de
        ld (regdump.xhl),hl
        exx
        pop hl                  ; remember the push!
        ld (regdump.phl),hl
        dec d                   ; de -> start of 256 bytes
        ld a,2
        ld (pause_on),a
        or a
        ret
.resume
        ld a,(pause_on)
        dec a
        jr nz,.restore_lenok
.restore_lit
        ld a,(regdump.ra)
        ld bc,(regdump.rbc)
        ld de,(regdump.rde)
        ld hl,(regdump.rhl)
        exx
        ld de,(regdump.xde)
        ld hl,(regdump.xhl)
        exx
        ld iy,.loop
        jp .resume_lit
.restore_lenok
        ld a,(regdump.ra)
        ld bc,(regdump.rbc)
        ld de,(regdump.rde)
        ld hl,(regdump.phl)
        push hl                 ; restore the push!
        ld hl,(regdump.rhl)
        exx
        ld de,(regdump.xde)
        ld hl,(regdump.xhl)
        exx
        ld iy,.loop
        jr .resume_lenok
.getbyte
        ld a,(hl)
        inc hl
        rla
        ret
.getbyteexx
        exx
        ld a,(hl)
        inc hl
        exx
        rla
        ret

        endmodule
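To call it, something like this should do. A minimal calling sketch, also untested; packed_data, buffer and process256 are placeholder names you would replace with your own, and buffer must satisfy the alignment requirement from the header comment:

; calling sketch (untested): packed_data, buffer and process256 are
; placeholder names; buffer must be aligned so that bit 4 of its high
; address byte is 0, as the depacker above requires
        ld hl,packed_data       ; hl -> compressed data
        ld de,buffer            ; de -> circular 4kb buffer
        call pakuit             ; extract the first 256 bytes
next    jr z,last               ; z-flag: stream completed
        call process256         ; de -> 256 extracted bytes, consume them
        call pakuit_resume      ; extract the next 256 bytes
        jr next
last                            ; de -> start of the final partial chunk,
                                ; unless total size was a multiple of 256
        ret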
ZX0 has a feature: "Compressing with prefix".
It can be useful, but I have a question:
What's the command to compress the first 256 bytes (0-255), and the command for the next 256 bytes (256-511), and so on? The docs aren't clear about this.
Thanks
Assuming the compression ratio of each chunk really benefits from having the previous chunk as a reference... I would use two consecutive buffers of 256b (512b in total).
- Split your data into 256b files.
- Compress the 1st block (zx0 data1). This will be unpacked to the first buffer.
- Compress the 2nd block using the 1st block as prefix (copy /b data1+data2 data12, then zx0 +256 data12). This will be unpacked to the second buffer (assuming the 1st block is in the first buffer).
- Compress the 3rd block backwards using the 2nd block as suffix (copy /b data3+data2 data32, then zx0 -b +256 data32). This will be unpacked backwards to the first buffer (assuming the 2nd block is in the second buffer).
- Compress the 4th block using the 3rd block as prefix (copy /b data3+data4 data34, then zx0 +256 data34). This will be unpacked to the second buffer (assuming the 3rd block is in the first buffer).
- Compress the 5th block backwards using the 4th block as suffix (copy /b data5+data4 data54, then zx0 -b +256 data54). This will be unpacked backwards to the first buffer (assuming the 4th block is in the second buffer).
- Repeat this sequence, alternating buffers (and forward/backwards decompression routines); see the calling sketch after this list.
Note: untested
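On the Z80 side, the call sequence could look like the sketch below (also untested). buf1/buf2 are the two consecutive 256-byte buffers, data1_zx0 and friends are the files produced by the steps above, data32_zx0_end is the address of the last byte of the backwards-compressed block 3, and dzx0_standard / dzx0_standard_back are assumed to be the forward and backwards ZX0 depack routines; check the exact names and entry conditions in your copy of zx0.

; untested sketch: buf2 must directly follow buf1 (buf2 = buf1+256);
; dzx0_standard takes hl -> compressed data, de -> destination;
; the backwards variant is assumed to take the LAST byte of both
        ld hl,data1_zx0         ; block 1, compressed on its own
        ld de,buf1
        call dzx0_standard      ; buf1 <- block 1
        ld hl,data12_zx0        ; block 2, with block 1 as prefix
        ld de,buf2
        call dzx0_standard      ; buf2 <- block 2 (matches may reach into buf1)
        ld hl,data32_zx0_end    ; last byte of block 3 (compressed with -b)
        ld de,buf1+255          ; last byte of buf1
        call dzx0_standard_back ; buf1 <- block 3 (matches may reach into buf2)
        ; blocks 4, 5, ... continue the same way, alternating buffers
        ; and forward/backwards routines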