Re: OpenMW Mod Manager
Posted: 08 Jan 2019, 04:34
It was the same issue as https://github.com/niftools/nifskope/pull/147: I'm using GCC 8.2 and Qt 5.10. It might be useful to list the dependencies in the readme.
I've got it working now using the change in the pull request.
jmelesky wrote: 07 Jan 2019, 21:21
What kind of times are you seeing, and with what length of mod lists? I guess I'm also curious what the more common use case is: add a bunch of mods occasionally, or add mods one at a time frequently? If it's the former, I'm inclined away from spending much time optimizing.

With 185 files being processed I'm getting a runtime of about 79 seconds. Not terrible if you only run it once after installing a bunch of mods, but in the context of installing them one at a time (if, for example, you wanted to test compatibility of each one before proceeding), it's pretty slow.
By comparison, just running sha512sum on all of those files takes about 1.7 seconds (on an SSD; this becomes relevant below).
The trouble with converting it to not-Python (I'm also not fond of C++; Rust, maybe? I like Rust) is that even if the result is an order of magnitude faster, it still may not scale well, seeing as the issue is caused by having lots of mods. Then again, I ran a profile of it, and the bulk of the work is being done in readSubRecord, almost certainly in the string slices, since Python apparently slices by copy, i.e. every time you take a slice of a string in readSubRecord it creates a new string.
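For illustration, here's a minimal sketch of the slice-returning pattern I mean (hypothetical code, not the actual readSubRecord from omwllf.py; it assumes Morrowind-style subrecords with a 4-byte type tag followed by a 4-byte little-endian length):

Code:
import struct

def read_sub_record(data: bytes):
    # Hypothetical slice-returning reader: every slice of a bytes
    # object allocates a new bytes object.
    sr_type = data[0:4].decode('ascii')
    (sr_size,) = struct.unpack('<I', data[4:8])
    sr_data = data[8:8 + sr_size]
    # The expensive part: copying the whole unparsed tail on every
    # call means parsing n subrecords copies O(n^2) bytes in total.
    rest = data[8 + sr_size:]
    return sr_type, sr_size, sr_data, rest

The small header slices are cheap; it's the copy of the entire unparsed tail on every call that dominates once a file contains tens of thousands of subrecords.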
Using a bytearray and consuming the processed bytes, rather than returning a slice of the remaining data, got the total time down to 25 seconds (16 when compiled with Cython). Still not amazing, but much better for such a trivial change. I also checked that the two resulting files were identical save for the list of mods in the DESC subrecord, which was in a different order.
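A sketch of the consuming version I mean (again hypothetical; the real change differs in its details):

Code:
import struct

def read_sub_record(buf: bytearray):
    sr_type = buf[0:4].decode('ascii')
    (sr_size,) = struct.unpack('<I', buf[4:8])
    sr_data = bytes(buf[8:8 + sr_size])
    # Consume the parsed bytes in place instead of returning a copy of
    # the tail. CPython can delete a bytearray prefix by advancing an
    # internal offset rather than shifting the data, which is what
    # avoids the repeated tail copies.
    del buf[:8 + sr_size]
    return sr_type, sr_size, sr_data

The caller then loops over the same bytearray until it's empty, instead of rebinding to a returned remainder each time.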
Not-Python could admittedly do better still by using references instead of slices for all the operations in readSubRecord, but it would be hard to tell how much better without trying it.
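(For what it's worth, the closest Python itself gets to reference semantics is memoryview, which slices without copying; a tiny sketch of the idea, untested for this workload:)

Code:
import struct

def read_sub_record(view: memoryview):
    sr_type = bytes(view[0:4]).decode('ascii')   # one tiny copy to decode
    (sr_size,) = struct.unpack('<I', view[4:8])  # reads the buffer in place
    sr_data = view[8:8 + sr_size]                # shares the buffer, no copy
    rest = view[8 + sr_size:]                    # also no copy, O(1)
    return sr_type, sr_size, sr_data, rest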
Relevant lines from the profile (97.996 seconds when profiling):

Code:
  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
14377555   75.323    0.000   81.510    0.000  omwllf.py:103(readSubRecord)
  220616   11.396    0.000   96.510    0.000  omwllf.py:111(readRecords)
     185    1.343    0.007   97.854    0.529  omwllf.py:138(getRecords)

After the modification (37.913 seconds when profiling):

Code:
  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
14377555   17.016    0.000   24.080    0.000  omwllf.py:103(readSubRecord)
  220616    8.728    0.000   36.079    0.000  omwllf.py:112(readRecords)
     185    1.353    0.007   37.432    0.202  omwllf.py:138(getRecords)

Virtually all the time is still being spent in getRecords, but substantially less in total.
Basically, this is what I would suggest to improve on the above change without rewriting in a different language. Write a manifest of the files used to build the merged levelled lists, containing their names and hashes (e.g. with sha512sum), and compare it against the files you collect every time the tool is run. Then you'd know whether a file has been removed, modified, or added, and could update the merged file to reflect that. What might also be worth trying is writing a cache of the parsed records in a more Python-friendly format for each file (marshal is apparently quite fast) and importing those for the files that haven't changed, given that reading all the records is the bottleneck.
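A rough sketch of what that could look like (every name here is made up, and the parsed records would need to consist of plain lists/dicts/bytes/etc., since marshal only handles Python's core types):

Code:
import hashlib
import marshal
from pathlib import Path

CACHE_DIR = Path('.omwllf-cache')  # hypothetical location

def sha512_of(path):
    h = hashlib.sha512()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

def cached_records(path, manifest, parse_records):
    """Return the records for one plugin file, re-parsing only when its
    hash no longer matches the manifest; parse_records stands in for
    the existing (slow) full parse."""
    digest = sha512_of(path)
    cache_file = CACHE_DIR / (Path(path).name + '.records')
    if manifest.get(path) == digest and cache_file.exists():
        return marshal.loads(cache_file.read_bytes())
    records = parse_records(path)
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file.write_bytes(marshal.dumps(records))
    manifest[path] = digest  # caller persists the manifest afterwards
    return records

Comparing the new manifest against the stored one also tells you directly which files were added, removed, or modified, which is what you'd need to update the merged list incrementally.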
Still, though, you might be right that using a language other than Python (specifically, one where we can avoid costly copying of strings) is the best way forward (if any): the time for everything other than getRecords is very small, and efficiently written, the overhead for that shouldn't be much more than the I/O cost (which is also necessary if caching the records).
Pull request for the small change from above incoming.