There are a couple of problems with this. Unless a file uses a random-access organization, it typically uses a sequential one, which means the file is read sequentially. This includes .doc files. Buffering is simply a way to read and hold some portion of the file (almost never all of it at once) in memory rather than go back to the disk for each piece of data. Buffering is something the operating system does transparently - you basically don't need to think of it as different from a sequential read. It's just a more sophisticated form of sequential read, and it's used in virtually all sequential reads.

Buffering also takes place on reads from a CD. It has to, in order to prevent variations in device/system read speed from affecting playback. If some data bits are blocked by dirt on the disc, the byte those bits belong to becomes corrupted and possibly unreadable. In some cases error correction can use checksum numbers - a total computed from the values of all the bytes in some 'block' or unit of the data - to work out what the corrupted bytes should have been. Often, if enough data is missing, the drive or operating system cannot come up with a unique correction. Mike Weil is correct in saying that if you can clean off some dirt and make the missing bytes resolvable, the resulting output is different.
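To illustrate the checksum idea in the simplest possible terms, here's a minimal sketch. It assumes a checksum that is just the sum of the byte values in a block (real CD error correction uses Reed-Solomon parity codes, which are more powerful, but the principle of "recoverable up to a point" is the same). The block contents and variable names are hypothetical.

```python
# Hypothetical sketch: a block checksum as the plain sum of byte values.
def block_checksum(block: bytes) -> int:
    return sum(block)

# A 'block' of data and the checksum that was stored alongside it.
block = bytearray(b"AUDIO DATA BLOCK")
stored_sum = block_checksum(block)

# If exactly one byte is unreadable, the checksum pins it down uniquely:
# the missing value must be the stored sum minus the sum of the readable bytes.
missing_index = 6
known_part = sum(block[i] for i in range(len(block)) if i != missing_index)
recovered_byte = stored_sum - known_part
print(chr(recovered_byte))  # prints 'D' - the unreadable byte is recovered

# If two or more bytes are missing, many different combinations add up to the
# same total, so there is no unique correction - the read either fails or the
# drive has to interpolate/guess.
```

The point is just that error correction data lets a small amount of damage be repaired exactly, but past a threshold the data can no longer be uniquely reconstructed, which is why cleaning the disc can change what actually gets read.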
Thanks. Please PM me your e-mail address and I will e-mail a few pics of the albums to you (and anyone else who is interested). I have always had problems posting pics on here, so it would be easier for me to e-mail photos.
All albums are open and have been played once or twice. All are in NM/NM condition. All records are flat and play perfectly with no issues.
Tor Lundvall’s “Yule” might not fit most people’s definition of Christmas music, but it sure is a cool-looking LP.
And the inner liner has some of his yuletide drawings...