OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • kyoto kid Posts: 40,561

     

    kyoto kid said:

    ...same here with my Titan-X.

    I WISH I had one of those, but they came out only after I bought my 1080! ; )

    ...you might be thinking of the Titan-Xp.  I have the Maxwell Titan-X.  A bit older, but still a major improvement over rendering on the CPU.

  • Artini Posts: 8,781

    If you want to be prepared for the new cards, buy the charity Humble Bundle for Unity, available for $15. Just search on Google for Humble Bundle.
    It includes the Gaia terrain generator, awesome 3D models, music, tutorials and other goodies for Unity.
    There are also a couple of Steam games included with the bundle if you pay $15 or more.
    The most important thing for me was the charity they picked to support - Girls Who Code.
    We need more female programmers - there is still time to support them - 6 hours left.
    https://en.wikipedia.org/wiki/Humble_Bundle
    There is also another awesome Humble Bundle with books explaining programming for game engines as well.

     

  • Ghosty12 Posts: 1,979

    In some more news, word from Asus themselves is that the lower-end 2050 and 2060 will not be seen until next year.

  • nicstt Posts: 11,714
    drzap said:

    This article gives the most concise and simplest breakdown of RTX that I have found thus far.   It explains the role of each of the chipset cores in the rendering pipeline while also delving into the NVLink feature in the consumer cards.  It even gives some insight that might explain why the Volta cards are so much faster in iRay and other renderers.  Also take note of the issues raised in the comments below the article.  This guy seems to know his stuff. Good read.

    https://www.pcper.com/reviews/Graphics-Cards/Architecture-NVIDIAs-RTX-GPUs-Turing-Explored

    Whilst a good article, it largely tells us what Nvidia has already told us; I'm sceptical until I can see hard data from multiple reviewers and, of course, Daz Studio users.

  • ebergerly Posts: 3,255
    Yeah, while they love to focus on hardware, dare I say it's in large part somewhat irrelevant. What really matters is to what extent all the complicated drivers and CUDA and APIs and software apps will actually take advantage of all that. And when. You have to rewrite the software to tell it what the heck a tensor core is, and how best to use it for what you're doing. If it's even relevant for what you're doing.
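
    For illustration, here is a minimal sketch of what "telling the software what a tensor core is" means in practice: tensor cores are only reached through explicit code paths such as CUDA's WMMA API (or libraries built on top of it), so nothing gets faster until kernels like this are written. This is the standard 16x16x16 half-precision example, assuming a compute capability 7.0+ GPU and nvcc with -arch=sm_70 or newer; it is not taken from Iray or Daz Studio.

        #include <cuda_fp16.h>
        #include <mma.h>                 // CUDA WMMA (tensor core) API
        using namespace nvcuda;

        // One warp computes a single 16x16 tile: C = A * B + C on the tensor cores.
        // a and b are half-precision 16x16 matrices, c is a float accumulator tile.
        __global__ void tensor_tile_mma(const half *a, const half *b, float *c) {
            wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
            wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
            wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

            wmma::fill_fragment(c_frag, 0.0f);               // start the accumulator at zero
            wmma::load_matrix_sync(a_frag, a, 16);           // leading dimension 16
            wmma::load_matrix_sync(b_frag, b, 16);
            wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // the actual tensor core operation
            wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
        }

    Code like the above has to exist somewhere in the renderer's own CUDA path (or in a library it links) before the tensor cores do anything for it.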
  • kyoto kid Posts: 40,561

    ..given Daz's relatively small development department, that all sounds like a tall order.

  • ebergerly Posts: 3,255
    I don't think DAZ is the one to do all that.
  • kyoto kid Posts: 40,561

    ..you sure?  They may still need to implement such coding on their end as well for the embedded version of Iray.

  • ebergerly Posts: 3,255
    kyoto kid said:

    ..you sure?  They may still need to implement such coding on their end as well for the embedded version of Iray.

    Well, yeah, they obviously need to write some very high-level code to interface with Iray and to expose any new high-level options (like "check this box to enable de-noising"), but as far as lower-level stuff dealing with the hardware, I don't think so. I imagine at the DAZ Studio level the hardware architecture of the GPUs is pretty much irrelevant. That's all handled by CUDA and Iray and the drivers and APIs.

  • kyoto kid Posts: 40,561
    edited September 2018

    ...used to code long ago, long before there was such a thing as a GPU or a dedicated graphics language (I do remember EGA and VGA cards).

    Before that we had to actually write code to produce a 3D image.

    Post edited by kyoto kid on
  • I am cautiously optimistic about the RTX series of cards. Having said that, I am very enthusiastic about this type of technology. I have been doing 3D artwork for going on 30 years. I have been anxiously waiting for this technology since I first messed around with POV-Ray and 3D Studio. I am a gamer as well, so I am looking forward to the 2080Ti, provided that the performance tests show optimistic results. Even if I only use RTX on single-player games, that is fine. I am sure the new cards will handle 4K without RTX on, and quite frankly I am still on a 1080p monitor anyway, so doing RTX at 1080p is fine for me. What I really want to know is the performance in renderers like Iray. That is what I can't wait to see.

    PS. I understand being cautious about spending money, but my goodness are there some serious negative Nancys in here. Half of this thread sounds like my grandpa shouting "Get off my lawn you young whippersnappers." or "In my day we had to make our own radios before we could listen to music without winding the phonograph."

  • ebergerly Posts: 3,255
    Keep in mind all this awesome new technology isn't all that new or awesome. GPUs have been around for decades, and the base technology for all of this was designed by grandpa's generation long ago. What we're seeing is incremental improvements on stuff that grandpa was working on years ago. So it's likely that grandpa understands what's under the hood of all of this excitement (as kyoto kid said, years ago folks were actually working with the bits and bytes and really understood this stuff at a low level). So maybe the negativity is from folks who understand the technology enough to know what's useless hype and what's fact and reason. I love this tech, and I'm presently reading two books on CUDA, doing GPU programming, and writing a ray tracer from scratch. Sometimes negative is knowledge.
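
    In that spirit, the "ray tracer from scratch" part is less exotic than it sounds: the core of a toy GPU ray tracer is just a kernel that fires one ray per pixel and tests it against some geometry, with no RT cores involved. A purely illustrative CUDA sketch (single hard-coded sphere, no shading; all names here are hypothetical):

        #include <cuda_runtime.h>

        // One thread per pixel: intersect a camera ray with a sphere of the given
        // radius centered at the origin, and write 1.0 for a hit, 0.0 for a miss.
        __global__ void trace(float *image, int width, int height, float radius) {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            // Camera at z = -3 looking down +z; ray direction through this pixel.
            float u = 2.0f * x / width  - 1.0f;
            float v = 2.0f * y / height - 1.0f;
            float ox = 0.0f, oy = 0.0f, oz = -3.0f;   // ray origin
            float dx = u,    dy = v,    dz = 1.0f;    // ray direction (unnormalized)

            // Solve |o + t*d|^2 = r^2, a quadratic in t; a hit exists if the
            // discriminant is non-negative.
            float a = dx * dx + dy * dy + dz * dz;
            float b = 2.0f * (ox * dx + oy * dy + oz * dz);
            float c = ox * ox + oy * oy + oz * oz - radius * radius;
            float disc = b * b - 4.0f * a * c;

            image[y * width + x] = (disc >= 0.0f) ? 1.0f : 0.0f;
        }

    Everything beyond that (materials, lights, bounces, acceleration structures) is what the books, and the new RT cores, are about.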
  • Ghosty12 Posts: 1,979

    And a couple of video reviews as well..

  • nicstt Posts: 11,714
    edited September 2018

    Very underwhelmed at this stage... And that is if I ignore the impressively high power consumption of the 2080ti.

     

    Post edited by nicstt on
  • I like Nvidia for pushing raytracing in game engines. A realtime raytraced game at 1080p is better than a non-raytraced one at 4K. The GPU power is within reach for 1080p realtime raytracing at 40-60 FPS. But it is still experimental; the game industry needs 1-2 years of development time to upgrade the game engines.

     

    For Daz3D everything depends on the implementation of the tensor and raytracing cores in Iray. Without that, RTX is roughly 30% faster than GTX with the same VRAM - no reason to upgrade at that expense. I still have a GTX 970 and wanted to buy a 1080 Ti in the sell-off. The RTX raytracing and tensor cores surprised me, but without information from Daz3D and Nvidia on how they will implement the new cores in their products, I cannot decide what to buy. I will wait longer; I don't like to buy promises, I prefer proven value.

  • kyoto kid Posts: 40,561
    ebergerly said:
    Keep in mind all this awesome new technology isn't all that new or awesome. GPUs have been around for decades, and the base technology for all of this was designed by grandpa's generation long ago. What we're seeing is incremental improvements on stuff that grandpa was working on years ago. So it's likely that grandpa understands what's under the hood of all of this excitement (as kyoto kid said, years ago folks were actually working with the bits and bytes and really understood this stuff at a low level). So maybe the negativity is from folks who understand the technology enough to know what's useless hype and what's fact and reason. I love this tech, and I'm presently reading two books on CUDA, doing GPU programming, and writing a ray tracer from scratch. Sometimes negative is knowledge.

    ..Oh I think it's fantastic that I can push pixels around with a cursor and sliders.  I dreamed of  such software decades ago when we still had to code everything line by line and never knew what we were going to get until it compiled. 

    My first 3D work was with pen plotters creating wireframe objects. When the college I was at got its first Tektronix "green screen" vector display, that was "the bomb".

  • kyoto kid Posts: 40,561

    ...I'm pretty much in the camp with the middle review. He does mention better performance for graphics workstations, but as he says, for the gaming world there is little improvement for now until games embrace raytracing. The other two videos primarily deal with gaming benchmarks, which tell me nothing as I am not into games. We really need reviews and benchmarks that focus on 3D graphics work and rendering, not just the big high-end programmes like 3DS and Maya either, but also render engines like Iray and Octane (he did make a reference to LuxRender as well, so I wonder whether they improved OpenCL performance?).

    I read the review on Tom's this morning as well, but there was really nothing new that I didn't already know, and again, benchmarks were only for existing games. I added a comment asking why, for the $1,200 price tag, Nvidia didn't drop that last module of memory in to raise the VRAM to 12 GB instead of staying with 11. The Titan-V is geared for a totally different type of use than the 2080 Ti, so I actually don't see a conflict there (they actually should have increased the Titan-V's memory to 16 GB, and it still would not have "overstepped" the P5000 or the forthcoming RTX 5000, as both cost less, and the Titan-V does not have NVLink compatibility while the RTX 5000 does).

  • bluejaunte Posts: 1,861

    Seems the raw 2080TI CUDA performance is pretty great. If only the price wasn't so high, it would be pretty decent even without the other fancy stuff. Let's hope we soon learn what Iray will do with the whole thing.

  • kyoto kid Posts: 40,561

    ...hopefully Nvidia won't drag their feet with the Iray drivers like with the Pascal cards and we can get some actual performance reports that are meaningful to us.

  • outrider42 Posts: 3,679
    edited September 2018
    ebergerly said:
    nicstt said:

    On the memory pooling there seems to be disagreement; RTX cards have the functionality, the software needs to be written to allow it. That seems to me like the cards have it. If it couldn't be implemented in software as the cards did not have the feature, then it would be correct to say they did not have memory pooling.

    Seems to me though that there is an argument about something we know nothing about as yet, except what Nvidia has chosen to tell us; I'd sooner wait.

    Don't assume that because the hardware capability exists, the feature exists. There are two parts to the equation: hardware and software. Writing software to implement a feature can be very difficult and time-consuming. And maybe it's not something that some/many/most software companies catering to lower-end gaming users are willing to spend time and money on. A lot depends on how the latest CUDA and the drivers are configured, and how easy they make it for developers to implement VRAM stacking. Instead of having, say, an entire scene located in each GPU's VRAM, you now have to configure it so that the scene information is spread across both GPUs' VRAM. And how do you handle "page migration" and "page faults", where you run out of VRAM? There's a lot of coordination and scheduling that needs to take place. And if gamers don't really care, why bother spending time developing software to implement it?

    The feature exists. The hardware is capable of it, it is just a matter of the software getting done. Whether that is a tall order or not is irrelevant. It can be done, and that is all that matters. If OTOY and others can figure out a way to do it, they will absolutely do it. They have plenty of interested users who want the feature.

    If the feature were totally impossible, then all Tom had to do was say so. The simple fact that he left the door open is all that is needed. People with motivation have done all sorts of things with code when given the smallest of openings. Sometimes they don't even need the door to be open, they just break in. Somebody will figure this out, whether it be a hacker or whatnot.
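
    For what it's worth, the hardware/software split in this argument is visible even at the CUDA runtime level: peer-to-peer access between two cards (over NVLink or PCIe) has to be queried and switched on by the application, and the renderer then still has to decide how to spread the scene between the two cards. A minimal sketch of just that enabling step, assuming two GPUs at device ordinals 0 and 1; this is generic CUDA, not Iray or Octane code:

        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            // Ask the driver whether each GPU can address the other's VRAM directly.
            int can01 = 0, can10 = 0;
            cudaDeviceCanAccessPeer(&can01, 0, 1);
            cudaDeviceCanAccessPeer(&can10, 1, 0);

            if (can01 && can10) {
                cudaSetDevice(0);
                cudaDeviceEnablePeerAccess(1, 0);   // flags argument must be 0
                cudaSetDevice(1);
                cudaDeviceEnablePeerAccess(0, 0);
                printf("P2P enabled: GPU 0 and GPU 1 can read each other's memory.\n");
                // The renderer still has to choose which textures and geometry live on
                // which card, and live with the remote-access latency -- the hard part.
            } else {
                printf("No P2P path between GPU 0 and GPU 1 on this system.\n");
            }
            return 0;
        }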

    Back to benchmarks, there is a very small bench by guru3d that is slightly more relevant to daz users, they did some vray testing.

    https://www.guru3d.com/articles_pages/msi_geforce_rtx_2080_ti_gaming_x_trio_review,25.html

    This test does not even have a 1080ti in it, but the 2080ti is almost exactly twice as fast as the 1080 here. Sadly there is no information on how this test was done, and the tester states it was all done very quickly and last minute. The test only has the 2080ti, the 2080, the 1080, and ....a 1050??? The 1050 is totally random; where is the 1080ti?

    The 1080 renders the scene in 100 seconds. The 2080ti renders it in 51 seconds, and the 2080 in 69. We do have that Puget benchmark that showed a 1080ti rendering the test scene in 67 seconds, and the $3000 Titan V rendering it in 42 seconds, but we do not know what other parts these computers were using, or what test settings they may have used. Given that Puget's 1080ti was so much faster than guru3d's 1080, there must be a lot more under the hood of that Puget system. It is frustrating that the two tests do not have a single GPU in common between them to give us any idea of how comparable they might be.

    Either way I think it is safe to conclude that vray is not properly using the RT or Tensor cores of the new cards.

    Post edited by outrider42 on
  • kyoto kid Posts: 40,561

    ...part of rendering speed also depends on the core count. The 2080Ti has about 70% more CUDA cores than the 1080 (4,352 vs. 2,560) and over five and a half times the number in the 1050 or 1050 Ti. Of course it would smoke the 1050 on core count alone, by something like a factor of five (which it almost does on the chart).

    Until RT is implemented for Iray, the same numbers will probably apply.

    As to short or tall order jobs, "tall" means more development resources will be required. That is easier to meet for a company like Autodesk or OTOY, which have larger staffs, not so much for one as small as Daz3D.

  • Aala Posts: 140
    kyoto kid said:

    ...part of rendering speed also depends on the core count. The 2080Ti has about 70% more CUDA cores than the 1080 (4,352 vs. 2,560) and over five and a half times the number in the 1050 or 1050 Ti. Of course it would smoke the 1050 on core count alone, by something like a factor of five (which it almost does on the chart).

    Until RT is implemented for Iray, the same numbers will probably apply.

    As to short or tall order jobs, "tall" means more development resources will be required. That is easier to meet for a company like Autodesk or OTOY, which have larger staffs, not so much for one as small as Daz3D.

    DAZ3D won't have to do it. The Iray devs will. DAZ3D will only have to integrate the Turing-supporting Iray into Daz Studio.

  • linvanchene Posts: 1,328
    edited September 2018

    @ No render engine update needed for Turing?

    At least for Octane, it seems an update may not be needed just to run the already supported basic CUDA functionality of the Turing cards:

    "All new Turing cards should work just out of the box with any Octane version which already supports Volta (since 3.08 Windows and Linux and 4 RC2 on OSX).

    Optimizations will come later on once an official CUDA 10 is released."

    Source: https://render.otoy.com/forum/viewtopic.php?f=33&t=68810#p346140

    - - -

    @ Delivery Status:

    In Switzerland today, 20 September, RTX 2080 cards from many brands were shipped to customers who preordered.

    Some RTX 2080 Ti cards, however, are delayed until 27 September.

    It would be interesting to have some information from customers in other countries to get an idea of how the situation looks overall.

    - - -

    @ NVLINK

    The RTX NVLink Bridge cannot be ordered through official Nvidia store channels from Switzerland and some other European countries.

    Nvidia stores in Europe are region-locked and not allowed to ship to other countries. A country-specific 3rd-party partner store did not have any information on future availability.

    Based on some forum posts, partners like EVGA may provide their own version for interested customers.

    - - -

    Edited:

    @ Iray Volta and Turing support

    Based on the release information, Iray 2018.0.1, build 302800.1716 features Volta support.

    compare:

    https://www.daz3d.com/forums/discussion/comment/3794321/#Comment_3794321

     

    Maybe someone can confirm if Turing cards work with Iray on that build as well?

     

     

     

    Post edited by linvanchene on
  • Ghosty12 Posts: 1,979

    The other thing is that Microsoft are working on RTX support for Windows 10 in the upcoming October Update.

  • Just as an update for you all:

    We got a 2080RTX in the office today and gave it a try on the latest Daz Studio public beta and the latest drivers from Nvidia.

    Good news is that it works right out of the box. Currently the performance is pretty good but we don't believe that the hardware raytracing functionality has been enabled in the currently shipping version of Iray.

     

    In the attached test scene, with the 2080RTX in an older computer (compared to the machine with the 980Ti), here is what we found:

    980Ti: 10 minutes and 30 seconds

    2080RTX: 6 minutes and 3 seconds.

     

    I wasn't able to test with a 1080 or 1080Ti machine today but hopefully the attached scene file (which can be rendered with only free assets you get when you sign up on our site) will let you all judge for yourself.

    Attachment: Iray Render Test 2.duf (591K)
  • ebergerly Posts: 3,255

    outrider42 said:

    The feature exists. The hardware is capable of it, it is just a matter of the software getting done. Whether that is a tall order or not is irrelevant. It can be done, and that is all that matters. If OTOY and others can figure out a way to do it, they will absolutely do it. They have plenty of interested users who want the feature.

     

    Regarding NVLink memory stacking in the GeForce RTX cards, I'm imagining the following hypothetical discussion at a software developer:

    Developer: "Hey, Boss, y'know those new RTX cards? The 2080 and 2080ti supposedly have that cool NVLink thing. Shouldn't we start working on making our software so it can stack memory ?"

    Boss: "That sounds cool, huh? Do they have any documentation to show how we'd write the software to do that? Is there anything in the new CUDA toolkit that shows what the new functions are so we can write the code?"

    Developer: "Well, not that I can see. I don't think they want to make it easy for us. They want to be able to charge more for the high end cards with those cool features"

    Boss: "Yeah, makes sense. So how much would one of our customers have to pay to buy two of those cards so they can memory stack?"

    Developer: "Well, looks like two 2080's would cost around $1,800 (with that NVLink connector), and two of the 2080tis would be around $2,500" 

    Boss: "What, are you kidding? How many of our customers are gonna go out and buy two of those things at those prices?"

    Developer: "Hold on, let me get my calculator...okay, hmm, carry the three, and multiply by this...okay boss, it looks like a grand total of maybe two of our customers. Maybe only one"

    Boss: "So you're actually asking me if I should pull developers off making those cool new features that will make thousands of our customers buy more stuff for their existing cards, and instead spend a bunch of money having them chase some feature that's not documented and we don't know how to write, just so a couple of customers can have a feature they may not even care about?" 

     

  • DAZ_Rawb, thanks for testing a new card and letting us know! Please keep the tests coming.

  • ebergerly Posts: 3,255
    edited September 2018
    DAZ_Rawb said:

    Just as an update for you all:

    We got a 2080RTX in the office today and gave it a try on the latest Daz Studio public beta and the latest drivers from Nvidia.
     

    Cool, thanks! Looks like those numbers show roughly a 43% improvement in render time over a 980ti (10:30 is 630 seconds, down to 363 seconds). Looking at the Sickleyield benchmark scene, a 1080ti gives something like a 33% improvement over a 980ti, so at this point the 2080 seems to be performing maybe 10-15% better than a 1080ti, at a price that's about 30% more.

    Post edited by ebergerly on
  • bluejaunte Posts: 1,861
    DAZ_Rawb said:

    Just as an update for you all:

    We got a 2080RTX in the office today and gave it a try on the latest Daz Studio public beta and the latest drivers from Nvidia.

    Good news is that it works right out of the box. Currently the performance is pretty good but we don't believe that the hardware raytracing functionality has been enabled in the currently shipping version of Iray.

     

    In the attached test scene, with the 2080RTX in an older computer (compared to the machine with the 980Ti), here is what we found:

    980Ti: 10 minutes and 30 seconds

    2080RTX: 6 minutes and 3 seconds.

     

    I wasn't able to test with a 1080 or 1080Ti machine today but hopefully the attached scene file (which can be rendered with only free assets you get when you sign up on our site) will let you all judge for yourself.

    Rendered in 4 minutes 4 seconds on 2x 1080Ti. Just as a random reference.
