Good observation! And I'm not going to change this. The Python implementation is intended to provide a correct, stable and easy-to-read code base for further experiments. The right place to do performance tuning is the generated C code. I'm happy to consider any suggestion to improve the generated code.
This depends. For occasional computations on small amounts of data, the optimized bit-by-bit implementation may be a feasible solution, with a speed comparable to that of the other models. On desktop computers, with larger amounts of data, the table-driven model is a good choice, but on embedded platforms, where code space is a major concern, an optimized bit-by-bit implementation might be the better choice. The table below summarizes the trade-offs; a sketch of the two main variants follows it.
Parameters | Width | Platform | Data quantity | Possible algorithms |
---|---|---|---|---|
Variable, Fixed | 1-16 bits | Embedded, Desktop | Low | bit_by_bit, bit_by_bit_fast |
Fixed | 8, 16 bits | Embedded, Desktop | Low, Medium | bit_by_bit_fast, table_driven (table index width: 4) |
Fixed | 8 or more bits | Desktop | Medium, High | table_driven |
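
To make the trade-off concrete, here is a minimal C sketch of both approaches for the common CRC-32 model (reflected input/output, polynomial 0xEDB88320, initial value and final XOR 0xFFFFFFFF). The function names are illustrative, not the identifiers pycrc generates (pycrc's `--algorithm` and `--model` options select the variant and model, if I recall the CLI correctly). The point is the code-space versus speed trade: the bit-by-bit variant needs no table but performs eight shift/XOR steps per byte, while the table-driven variant spends 1 KiB on a 256-entry table to do one lookup per byte.

```c
#include <stddef.h>
#include <stdint.h>

/* Bit-by-bit CRC-32 (reflected, polynomial 0xEDB88320):
 * small code, no table, eight shift/XOR steps per input byte. */
static uint32_t crc32_bit_by_bit(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Table-driven CRC-32: a 256-entry table (1 KiB), one lookup per byte. */
static uint32_t crc32_table[256];

static void crc32_table_init(void)
{
    for (uint32_t n = 0; n < 256; n++) {
        uint32_t c = n;
        for (int i = 0; i < 8; i++)
            c = (c >> 1) ^ ((c & 1u) ? 0xEDB88320u : 0u);
        crc32_table[n] = c;
    }
}

static uint32_t crc32_table_driven(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;
    while (len--)
        crc = (crc >> 8) ^ crc32_table[(crc ^ *p++) & 0xFFu];
    return crc ^ 0xFFFFFFFFu;
}
```

Both functions return 0xCBF43926 for the standard check string "123456789". The table index width of 4 mentioned in the table is the middle ground: a 16-entry table processed one nibble at a time, trading two lookups per byte for a much smaller table.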
You. I have added a special exception to the copyright statement which permits the use of pycrc's output in larger works and to distribute that code under terms of your choice.
If you decide to include pycrc's output in your work, I would appreciate a short mention of it in your source code or documentation.