GPEmu: A GPU emulator for rapid, low-cost deep learning prototyping [pdf]
71 points | 1 day ago | 5 comments | vldb.org
mdaniel
23 hours ago
[-]
Sadly, there's no license file in the repo, and I have no idea what licensing weight the associated fields in setup.py carry <https://github.com/mengwanguc/gpemu/blob/27e9534ee0c3d594030...>
reply
0points
17 hours ago
[-]
MIT is a well-known FOSS license.
reply
devturn
16 hours ago
[-]
Nobody here is doubting that. Your parent comment said:

> I have no idea what the licensing weight the associated fields in setup.py carry

That's a valid concern. I had the same question myself.

reply
immibis
13 hours ago
[-]
The relevant weight is: if the author of the copyrighted work sues you in a court of law, will the evidence convince the judge that the author gave you permission to use it that way?
reply
IX-103
10 hours ago
[-]
That assumes you are willing to pay for lawyers. If not, the relevant weight is "will the author (or any subsequent copyright owners) sue you".
reply
socalgal2
21 hours ago
[-]
What is the difference between a GPU emulator (GPEmu specifically) and, say, llvmpipe?
reply
Voloskaya
14 hours ago
[-]
I was thinking about building something like this, because it would be *very useful* if it worked well, so I got excited for a sec. But this doesn't seem to be an active project; the last commit was 10 months ago.
reply
Retr0id
13 hours ago
[-]
Doesn't it work out more expensive to emulate a GPU than to just rent time on a real one?
reply
Voloskaya
12 hours ago
[-]
This isn't actually an emulator in the proper sense of the word. It does not give you correct outputs, but it tries to simulate the time a real GPU would take to perform the series of operations you care about.

This could be useful e.g. for performance profiling, optimization etc.
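
As a rough sketch of the kind of profiling that enables (hypothetical numbers and names, not GPEmu's actual API): the CPU side of a training pipeline runs for real, the GPU side is just a predicted delay, and you can still see which one is the bottleneck.

    import random
    import time

    # Made-up projected GPU compute time per batch, as a performance model would supply it.
    SIMULATED_GPU_TIME = 0.085  # seconds per batch

    def load_and_preprocess_batch():
        # Stand-in for real CPU-side work (loading, decoding, augmentation).
        time.sleep(random.uniform(0.05, 0.12))

    cpu_times = []
    for _ in range(20):
        start = time.perf_counter()
        load_and_preprocess_batch()
        cpu_times.append(time.perf_counter() - start)

    cpu_avg = sum(cpu_times) / len(cpu_times)
    bottleneck = "data loading" if cpu_avg > SIMULATED_GPU_TIME else "GPU compute"
    print(f"CPU {cpu_avg:.3f}s/batch vs simulated GPU {SIMULATED_GPU_TIME:.3f}s/batch -> bottleneck: {bottleneck}")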

reply
MangoToupe
12 hours ago
[-]
I imagine this is only true for high-throughput loads. For development, a full GPU is likely a waste.
reply
almostgotcaught
23 hours ago
[-]
> To emulate DL workloads without actual GPUs, we replace GPU-related steps (steps #3–5, and Step 2 if GPU-based) with simple sleep(T) calls, where T represents the projected time for each step.

This is a model (of the GPU arch/system/runtime/etc.) being used to feed downstream analysis. Pretty silly, because if you're going to model these things (which are extremely difficult to model!) you should at least have real GPUs around to calibrate/recalibrate the model.
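
For what it's worth, the substitution they describe is roughly this (a minimal sketch; the step names and times are mine, not GPEmu's, and in the paper T comes from their projected performance models):

    import time

    # Hypothetical projected step times (seconds). In the paper these projections
    # come from a performance model, which is exactly the part that needs real
    # GPUs to calibrate.
    PROJECTED = {"host_to_device": 0.004, "forward": 0.031, "backward": 0.058}

    def preprocess(batch):
        # Stand-in for the CPU-side data preparation, which still runs for real.
        pass

    def train_step(batch):
        preprocess(batch)
        for step_time in PROJECTED.values():
            time.sleep(step_time)  # the GPU-side steps become sleep(T)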

reply