I think the underlying problem you want to solve is that a GPU feeder process should allocate memory local to the CPU to which the GPU is attached. (Speaking theoretically, not from my own experience.) That way, DMA to/from the GPU stays local to that CPU. Otherwise, such DMA would involve both CPUs and the QPI link between them.

(By the way, some server mainboard makers offer special "single PCIe root" boards for multi-GPU computing applications in which the GPUs need to communicate with each other. I suppose these are simply boards with PCIe switches on them. However, GPU-GPU communication is not required in any Distributed Computing project, as far as I know.)

So you'd like to configure processor affinity of GPU feeder processes. I am not aware of direct support for processor affinity in boinc-client (or boincmgr, boinccmd, boinctasks, etc.), which means you need an external tool. For controlling processor affinity on Windows, I have several times read people recommending Process Lasso. I haven't tried it myself yet and haven't researched its precise capabilities. Notably, I wonder whether Process Lasso can detect at all, without your manual intervention, which GPU feeder processes should run on CPU0 and which on CPU1. Maybe this is possible if you run separate boinc-client instances: Process Lasso would then need to be instructed that all new processes launched from one client are bound to CPU0, and all new processes launched from the other client go to CPU1. If multiple client instances are indeed an element of solving processor affinity for GPU feeder processes, I would go a step further and run those two client instances for GPU projects plus a third client instance for CPU projects.
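As a minimal sketch of the OS-level primitive that an affinity tool applies per process (whatever Process Lasso itself does internally is an assumption here): on Linux, Python's standard library exposes it as os.sched_setaffinity; on Windows, the analogous call is the Win32 API SetProcessAffinityMask. The core IDs below are placeholders — which core numbers belong to which CPU socket is machine-specific.

```python
import os

# Placeholder assumption: core 0 belongs to socket CPU0 on this machine.
# Real socket-to-core mapping must be read from the system topology
# (e.g. /sys/devices/system/cpu/cpu*/topology/physical_package_id on Linux).
CPU0_CORES = {0}

def pin_process(pid, cores):
    """Bind process `pid` (0 = the calling process) to the given set of
    core IDs, so its threads -- and hence its locally allocated memory
    and DMA buffers -- stay on one CPU socket. Returns the resulting
    affinity set for verification."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    # Pin this process to the (assumed) CPU0 cores.
    print(pin_process(0, CPU0_CORES))
```

An external watcher would do this for every new child process of one boinc-client instance with the CPU0 core set, and for children of the other instance with the CPU1 core set.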