If you happen to check the OctaneBench results page, you have probably noticed systems at the top of the list scoring four digits, well over 1000 (for reference, 100 is the performance of a single GTX 980). You might be left wondering “how is that even possible?”. Well, that’s the theme of this article.
Splitters, ribbon cables, clusters, servers, …
Most of the high-end results are achieved with server-grade hardware & eight or more Kepler or Maxwell based GPUs (most likely hooked to the host system via splitters or GPU clusters). The performance of the new Pascal GPUs (1080s) in OctaneRender is yet to be seen, but based on SP numbers we might be able to reach the same results with 5-6 GPUs.
Let’s start from the backbone, the piece that holds the entire system together. High-end consumer (gaming/workstation) motherboards have four physical PCIe 16x slots, most likely running at 8x when all of them are populated at the same time. They are usually spaced out to accommodate four dual-slot GPUs & unless you’re using cards like the Titan Z (with two chips under the hood), there seems to be no way to fit more than 4 GPUs into those motherboards. Even then, four Titan Z cards couldn’t be placed directly without watercooling due to their triple-slot coolers. Feel free to read more about the Titan Z if you’re interested & at the bottom of that article you’ll find more information about a 4x Titan Z based rig & the solutions that make it possible.
Another way to add more cards would be to use something like GPU-oriented splitters, PCIe expansion clusters, or external enclosures plugged into one of the PCIe slots. All of those solutions come in different shapes & interface speeds & range from a few hundred to a few thousand (without GPUs).
To cover this wide topic I’m starting an entirely new section about external GPUs on my website, where you’ll be able to find more information on everything from commercial “off-the-shelf” solutions to custom tailor-made boxes.
An alternative would be to get a motherboard that simply has more PCIe slots. Most of these (server oriented) motherboards will have a pair of CPUs (at the moment Octane Render would not benefit from anything more than a fast quad core), but you can also find some single-CPU options (like the ASUS X99-E WS or ASRock X99 WS-E) that have a pair of PLX chips under the hood to provide up to 8 lanes to each of seven GPUs (or any other PCIe devices, like fast PCIe storage). However, having more slots doesn’t solve the problem right away.
Physical limitations and other ramifications
Physical limitations don’t go anywhere: you are still constrained by the same space for up to four dual-slot GPUs on an (E)ATX standard motherboard. Using PCIe ribbon cables to displace the GPUs would be one way to plug in more than 4 cards. Such a mining-rig concept might not suit most people due to its open nature & exposed wires, but it’s probably one of the cheapest ways to expand.
It’s worth mentioning that ribbon risers might also introduce some problems. Each one is an extra element in the path of a very sensitive signal & might expose those signals to magnetic fields, causing stability issues, crashes, etc. (we’ve covered this topic loosely in the “Going on air wide open” article featuring the Polder Animation rig). Expensive PCIe ribbon cables (which could cost you up to $100 per piece) with extra shielding & good contacts are recommended for the best stability, but cheap options (for less than $10) do work for some as well.
The external GPU route might seem the better choice, but in most cases (if you look at the x16 Gen3 interface) it’s not exactly the cheapest one & as with server units, you would still end up losing performance.
Losing performance? What do you mean?
The explanation here is very simple. Whether you go with a computer full of cards or a fully populated eGPU box, you are likely going to have GPUs placed side by side in close proximity.
Inside most cases the heat would reach levels (the limit is set around 85 °C by default) where the GPUs start throttling down to prevent damage & because of this built-in protection they run slower, so you end up losing rendering speed.
Want proof? Look at the OctaneBench results of 7x 980 Ti compared to 8x or even 9x – they are almost the same! The performance difference in builds like this can be counted in whole cards. Imagine spending €1000 per GPU (like a Titan X) & having 2 out of 9 cards effectively sitting there doing nothing!
You might say these are just a few results & could be rigged. OK, let’s take a look at the quad 980 Ti results (60 submitted so far). The OctaneBench score for this configuration ranges from 413 to 570. That’s a big difference if you ask me.
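To put that spread in perspective, here is a quick back-of-envelope check in Python. The single-card score of ~140 is my own assumption for a well-cooled 980 Ti (it is not part of the submitted results); the quad-card numbers are the ones quoted above:

```python
# Back-of-envelope scaling check using the quad 980 Ti OctaneBench range above.
# ASSUMPTION: a single well-cooled 980 Ti scores roughly 140 (not from the text).
single_980ti = 140.0
quad_results = {"lowest submitted": 413, "highest submitted": 570}

for label, score in quad_results.items():
    efficiency = score / (4 * single_980ti)
    print(f"4x 980 Ti, {label} ({score}): {efficiency:.0%} of ideal linear scaling")
```

Under that assumption the lowest submission runs at roughly three quarters of ideal scaling – in other words, one of the four cards is effectively wasted.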
Other things to keep in mind
Raising cards off a multi-GPU capable motherboard is a relatively cheap option (we’ll talk more about DIY open cases inspired by miners in the future), but as already mentioned, remember that cable quality might cause stability issues.
Splitters are another option & let you save on the motherboard (but be careful with BIOS limitations, as not all motherboards allow a lot of GPUs to be detected). Keep in mind that the slower interface of some risers or splitters might cause issues for certain workflows, leaving your GPUs under-utilised or even causing system crashes.
Dedicated servers have a fast interface & usually a backplane with much more physical space (sometimes between the cards as well), but along with those advantages they are huge, loud & far from cheap. Most eGPU boxes suffer from the same issues.
When you think about it, that’s understandable in a way, considering that those in the market for such solutions rarely care about noise (they tend to keep computers in separate rooms with good noise insulation & climate control) & prefer rack-mount compatible form factors.
What is the best solution for a freelancer or a small studio?
Well, it’s your choice in the end; all the mentioned routes do work. If you need that kind of rendering power, think about your preferences (value, size, stability, silence, etc.) & on the other hand which compromises you can tolerate (spending more, living with a noisy system, sacrificing some speed, card longevity, stability…).
We will go much more in depth on the different routes in the future, but today let me leave you with something I would call… the best solution I’ve seen so far (I’ve been dreaming about this since I saw the ASUS P6T7 WS, EVGA’s single-slot GTX 580s & this shot).
Here are a few important aspects:
* no compromise – rock solid stable,
* built with value in mind,
* high performance, scoring ~1000 in OctaneBench,
* coming in one piece (that has no messy wires or cards exposed),
* running silent (not like the vacuum cleaner & hairdryer of a typical server oriented enclosure),
* operating at way lower temperatures (helping components to last longer under extended workloads).
Meet the workstation equipped with 7 single-slot GPUs
Lately I was contacted by someone looking to get a quad-GPU system with watercooling. After a short conversation we came to an interesting conclusion:
A 7-GPU rig would cost ~30% more while in return offering ~75% more raw performance.
Needless to say, the value proposition was too good to dismiss & that’s how this build started.
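For the curious, the value math behind that conclusion is trivial; a minimal sketch, assuming the originally planned 4-GPU watercooled system as the baseline:

```python
# Value comparison from the conversation above.
# ASSUMPTION: the originally planned 4-GPU watercooled build is the baseline.
cost_factor = 1.30   # the 7-GPU rig costs ~30% more (from the text)
perf_factor = 1.75   # ...while offering ~75% more raw performance (from the text)

value_gain = perf_factor / cost_factor
print(f"Performance per unit of money: {value_gain:.2f}x the baseline")  # ~1.35x
```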
With most parts more or less settled, there was still the question of how to fit 7 GPUs onto the motherboard (in this case the ASUS X99-E WS). One option was to get EVGA K|NGP|N 980 Tis, which like other K|NGP|N cards become single-slot ready when watercooled (all connectors on the back I/O shield sit in a single row). The downside of these GPUs was the price: ~€250 more per unit, plus more expensive waterblocks due to a somewhat limited production run. All expenses taken into account, about €2,000 more for the entire system.
The main downside of choosing to modify reference 980 Tis instead, by “simply” cutting off the DVI port on the second row, is that you lose all warranties (but… save money). With this build being focused more on value, the second option was favoured. That’s how this powerhouse ended up looking:
The rest of this build is pretty simple: a 6-core 40-lane CPU, 64GB of DDR4 RAM, some SSDs/HDDs, an EVGA 1600W PSU & the huge 900D case by Corsair. The watercooling parts list is too long to cover in every detail, but it’s worth mentioning the EK Revo Dual pump top with a pair of D5 pumps for redundancy & sufficient water pressure & flow, plus 7 waterblocks from Aqua Computer for GM200 based GPUs (they are labelled Titan X, but the Titan X waterblocks fit reference 980 Ti PCBs perfectly – technically those are nearly identical GPUs, with the main difference being the amount of VRAM). One of the key elements was the 7-GPU bridge from Aqua Computer. The rest worth mentioning would be the equivalent of 12x 120mm of radiator space with a total of 18 fans, all controlled by an Aqua Computer Aquaero 6, automatically adjusting fan speeds based on water temperature.
Conclusion
Without getting too deep into geeky details, let’s try to sum up this build.
It comes in one box, runs silent (even under full load the fans hardly spin over 1100RPM), keeps temperatures for all 7 GPUs below 50 °C while the entire system draws up to 1350W from the wall & gives back close to 1000 in OctaneBench.
This is music to my ears & definitely the best workstation for Octane Render I’ve seen so far!
More details are coming soon, with tons of beautiful photos from Sebastian, who in the end turned this crazy idea into reality & assembled this monster. I’ll post a link to the build log here & add a few more shots later. Stay tuned!
The XL Release ALM Octane plugin enables XL Release tasks to interact with an ALM Octane server.
Features
Using the ALM Octane plugin for XL Release, you can perform the following tasks to interact with an ALM Octane server:
- ALM Octane: Create Defect
- ALM Octane: Gate
Prerequisites
In ALM Octane, define API access with the team member role for the plugin. Save the client ID and client secret for use when connecting to an ALM Octane server in XL Release.
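For reference, the client ID and secret map to ALM Octane's REST sign-in endpoint. Below is a minimal sketch in Python (not part of the plugin); the server URL and credentials are placeholders:

```python
import requests  # third-party HTTP library

OCTANE_SERVER = "https://myoctane.example.com:8080"  # placeholder server URL

session = requests.Session()
resp = session.post(
    f"{OCTANE_SERVER}/authentication/sign_in",
    json={"client_id": "my_client_id", "client_secret": "my_client_secret"},
)
resp.raise_for_status()
# On success, ALM Octane sets authentication cookies on the session;
# subsequent REST calls made through `session` reuse them.
print("Signed in:", resp.status_code)
```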
Set up and connect to an ALM Octane server
To set up and connect to an ALM Octane server:
In XL Release, go to Settings > Shared configuration and click Add Workspace under ALM Octane.
In the Title box, enter any name for the ALM Octane configuration.
In the Server box, enter the URL for the ALM Octane server, including the IP address and the port.
In the Client ID and Client Secret boxes, enter the ALM Octane client ID and secret.
In the Space ID and Workspace ID boxes, enter the corresponding ALM Octane IDs.
Click Save to save the customization.
ALM Octane: Create Defect
The ALM Octane: Create Defect task type creates defects in ALM Octane.
The plugin creates the defects:
In the root of the ALM Octane backlog tree.
As drafts. This means that when opening up these defects in ALM Octane, you must enter values for mandatory fields.
With an origin of XL Release, so the source of the defect is clear.
The following properties are available:
Workspace: The workspace in which to create the defect, based on the connected shared configuration. Mandatory.
Defect Name: The defect name. Mandatory.
Release Name: The ALM Octane release. Mandatory.
Defect Description: The defect description.
The output of the task includes:
Defect ID. You can enter the name of a variable in which to store the ID of the ALM Octane defect being created.
Defect URL. You can enter the name of a variable in which to store the URL for accessing the defect in ALM Octane.
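For orientation, a task like this corresponds roughly to the following ALM Octane REST call. This is a hedged sketch, not the plugin's actual implementation; it reuses the authenticated session from the sign-in sketch above, and all IDs and field values are placeholders:

```python
# Reuses `session` and OCTANE_SERVER from the sign-in sketch above.
SPACE_ID, WORKSPACE_ID = "2001", "3001"  # placeholder space/workspace IDs

payload = {"data": [{
    "name": "Login page returns 500",              # Defect Name (mandatory)
    "description": "Created from XL Release",      # Defect Description
    "release": {"type": "release", "id": "1001"},  # resolved from Release Name
}]}

resp = session.post(
    f"{OCTANE_SERVER}/api/shared_spaces/{SPACE_ID}/workspaces/{WORKSPACE_ID}/defects",
    json=payload,
)
resp.raise_for_status()
defect = resp.json()["data"][0]
print("Defect ID:", defect["id"])  # maps to the task's Defect ID output variable
# The Defect URL output can be built from the server URL and the defect ID.
```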
ALM Octane: Gate
The ALM Octane: Gate task type executes a query on an ALM Octane server to retrieve a list of defects and their names.
The following properties are available:
Workspace: The workspace in which to execute the query, based on the connected shared configuration. Mandatory.
Release Name: The ALM Octane release. Mandatory.
Query Defect Phases: A query that finds the defects that have the selected phases.
Query Defect Severity: A query that finds the defects that have the selected severity.
Threshold: The number of defects that can be tolerated for subsequent tasks to run.
Threshold Operator: The operator that defines if the number of defects is acceptable compared to the Threshold.
The output of the task is:
- A list of defects displayed in ALM Octane.
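The threshold check itself is a simple comparison. Below is a minimal sketch of the gate decision, assuming the defect count has already been retrieved by the query; the operator names are illustrative, not the plugin's exact option set:

```python
import operator

# Illustrative operator names; the plugin's actual option labels may differ.
THRESHOLD_OPERATORS = {
    "less than": operator.lt,
    "less than or equal": operator.le,
    "equal": operator.eq,
}

def gate_passes(defect_count: int, threshold: int, op_name: str) -> bool:
    """Return True if subsequent tasks are allowed to run."""
    return THRESHOLD_OPERATORS[op_name](defect_count, threshold)

# Example: tolerate at most 5 matching defects.
print(gate_passes(defect_count=3, threshold=5, op_name="less than or equal"))  # True
```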
Sample scenario
You can create an ALM Octane: Create Defect task that creates a defect if a precondition indicates that there are defects of high severity.
You can create an ALM Octane: Gate task that will not let any subsequent tasks run if a high-severity defect was opened.