Subject: General Tech
Manufacturer: Pengo

Overview

Recently, we got the opportunity to take a look at an interesting video capture device from a company called Pengo. While we had never heard of this company before, the promise of 4K 60Hz video capture for $150 was too compelling to pass up.

The Pengo 4K is also a UVC capture device, so it uses the standard Microsoft video class drivers and will work with any application that can see webcam input, with no additional software or drivers required. Pengo also claims macOS and Linux support for this device, although you would have to find software that knows how to deal with UVC devices.
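
Because the device enumerates as a standard UVC camera, any library or application that can open a webcam should be able to pull frames from it. The snippet below is a minimal sketch of that idea using Python and OpenCV; the device index and output filename are our own assumptions for illustration, not anything Pengo documents.

```python
# Minimal sketch: treating a UVC capture device like any other webcam.
# The device index (0) is an assumption and may differ on your system.
import cv2

cap = cv2.VideoCapture(0)                  # open the first video device the OS exposes
if not cap.isOpened():
    raise RuntimeError("No UVC capture device found at index 0")

ok, frame = cap.read()                     # grab a single frame, just like a webcam
if ok:
    print("Captured frame:", frame.shape)  # (height, width, channels)
    cv2.imwrite("capture.png", frame)      # hypothetical output filename

cap.release()
```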

From a design perspective, the Pengo 4K is quite simple. The device itself is made from aluminum and about the size of a deck of playing cards.


In addition to video capture, you can also use the Pengo as an audio input/output device through the audio connectors on the front.


Taking a look at the back of the Pengo reveals my one major gripe with the device. Instead of using a proper port like Micro-USB or USB-C, the device ships with a Type-A to Type-A cable, which actually violates the USB specification and makes finding a replacement cable, or one longer than the roughly one-foot included cable, difficult.


We used OBS to record footage from the Xbox One X through the Pengo 4K, and the Xbox is, in fact, capable of outputting full 4K 60Hz content to this capture card.

However, further investigation revealed that while the Pengo ingests 4K footage, it is only actually capable of recording at 1080p 60Hz, meaning that it internally downsamples the footage.
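
One quick way to see this for yourself is to ask the device for a 2160p60 mode and read back what it actually negotiates. The sketch below uses Python and OpenCV purely as an illustration; the device index is an assumption, and on the Pengo we would expect it to report 1920x1080 even with a 4K source attached.

```python
# Hypothetical check: request 4K60 from the capture device and read back
# the format it actually negotiates. Device index 0 is an assumption.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2160)
cap.set(cv2.CAP_PROP_FPS, 60)

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"Negotiated capture format: {width}x{height} @ {fps:.0f} fps")

cap.release()
```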

This still makes some sense, allowing you to keep your console or PC outputting 4K to your local display while gaming, but it's disappointing to see the capture functionality limited to 1080p. To be fair, the recording limitation is buried on the specifications page, but overall it seems disingenuous to market this device so heavily as "4K".

For anyone looking for an inexpensive, easy-to-use capture device, I would still recommend taking a look at the Pengo 4K HDMI Grabber. However, if you are looking for true 4K capture, this is not the device for you.

Podcast #509 - Threadripper 2950X/2990WX, Multiple QLC SSDs, and more!

Subject: General Tech | August 16, 2018 - 03:16 PM |
Tagged: xeon, video, Turing, Threadripper, ssd, Samsung, QLC, podcast, PA32UC, nvidia, nand, L1TF, Intel, DOOM Eternal, asus, amd, 660p, 2990wx, 2950x

PC Perspective Podcast #509 - 08/16/18

Join us this week for discussion on Modded Thinkpads, EVGA SuperNOVA PSUs, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Ken Addison, Alex Lustenberg

Program length: 1:35:10

Podcast topics of discussion:
  1. There is no 3
  2. Week in Review:
  3. News items of interest:
  4. Other stuff
  5. Picks of the Week:
  6. Closing/outro
 
 

6GHz across 32 cores, ThreadRipping mayhem

Subject: General Tech | August 16, 2018 - 02:28 PM |
Tagged: amd, threadripper 2, 2990wx, overclocking, LN2

The low-cost, workstation-class 2990WX has been verified running at 5.955GHz on an MSI MEG X399 Creation board, with the help of a lot of liquid nitrogen.  The Inquirer has links to the setup Indonesian overclocker Ivan Cupa needed in order to manage this feat, which required fans to cool certain portions of the motherboard as well.  You are not likely to see this setup installed in a server room, but the achievement is no less impressive, as that is an incredible frequency to reach.  Check it out in all its glory.


"So far, it would seem that AMD is on top when it comes to willy-waving, though it's worth noting that overclocked performance is a tad nebulous and real-world in-app performance is really where choosing an Intel or AMD chip comes to play."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Inquirer

Rage against the remake; Jagged Alliance exhumed once again

Subject: General Tech | August 15, 2018 - 03:45 PM |
Tagged: gaming, jagged alliance rage

The first and second Jagged Alliance games, and to an extent the add-on to JA2, were incredible games for those who liked turn-based tactical shooters.  From there it was all downhill, as the original idea was corrupted into Jagged Alliance Online and the Kickstarted Jagged Alliance: Flashback.  The devs behind JA Online are now working on Jagged Alliance: Rage!, which puts you in control of some aged mercenaries susceptible to infections and permanent injuries.  That mechanic is new and might indicate there is hope for the game yet, especially at the $20 price tag that has been chosen.  Take a peek at the announcement video over at Rock, Paper, SHOTGUN.


"In Jagged Alliance: Rage! you are constantly on the brink of breakdown. Badly equipped and outnumbered, it’s up to the player to lead their seasoned mercenaries in tactical turn-based missions and to light the spark of a revolution."

Here is some more Tech News from around the web:

Tech Talk

 

The biggest little storehouse in Texas ... terabytes on gumsticks

Subject: General Tech | August 15, 2018 - 02:42 PM |
Tagged: SK Hynix, Terabyte, toshiba, QLC NAND

This year at the Flash Memory Summit, big is in: Toshiba unveiled an 85TB 2.5" SSD and suggested a 20TB M.2 drive is not far off.  SK Hynix will release a 64TB 2.5" SSD built on a 1Tbit die, which analysts expect to offer somewhat improved reads and writes compared to their previous offerings.  The two companies will be using 96-layer QLC 3D NAND in these drives, and The Register expects we will see them use an NVMe interface as opposed to SATA.  Check out the story for more detail on these drives as well as what Intel is working on.


"The Flash Memory Summit saw two landmark capacity announcements centred on 96-layer QLC (4bits/cell) flash that seemingly herald a coming virtual abolition of workstation and server read-intensive flash capacity constraints."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

BAPCo Launches SYSMark 2018 Benchmarking Suite for PCs

Subject: General Tech | August 15, 2018 - 10:59 AM |
Tagged: sysmark, sysmark 2018, bapco, benchmarks

SYSMark is an application-based benchmarking suite used by many PC OEMs and enterprises to evaluate hardware deployments, as well as by us here at PC Perspective to evaluate system performance.


By using a variety of widely used applications such as Microsoft Office and the Adobe Creative Suite, SYSMark can provide insight into the performance levels of typical user activities like Productivity, which can be difficult to quantify otherwise.

As part of the upgrade to SYSMark 2018, the applications used for testing have been updated as well, including Microsoft Office 2016, Google Chrome 65, Adobe Acrobat Pro DC, Adobe Photoshop CC (2018), CyberLink PowerDirector 15, Adobe Lightroom Classic CC, and AutoIT 3.3.14.2.

SYSMark 2018 is available today from BAPCo's online store.

Source: BAPCo

NVIDIA Officially Announces Turing GPU Architecture at SIGGRAPH 2018

Subject: General Tech | August 13, 2018 - 07:43 PM |
Tagged: turing, siggraph 2018, rtx, quadro rtx 8000, quadro rtx 6000, quadro rtx 5000, quadro, nvidia

Today at the professional graphics-focused SIGGRAPH conference, NVIDIA's Jen-Hsun Huang has unveiled details on their much-rumored next GPU architecture, codenamed Turing.


At the core of the Turing architecture are what NVIDIA refers to as two "engines": one for accelerating ray tracing, and the other for accelerating AI inferencing.

The ray tracing units are called RT Cores and are not to be confused with the NVIDIA RTX real-time ray tracing technology announced at GDC this year. There, NVIDIA was using its OptiX AI-powered denoising filter to clean up ray-traced images, allowing it to save on rendering resources, but the actual ray tracing was still being done on the GPU's shader cores themselves.

Now, these RT Cores will perform the ray calculations themselves at what NVIDIA claims is up to 10 GigaRays/second, or up to 25X the performance of the current Pascal architecture.


Just like we saw in the Volta-based Quadro GV100, these new Quadro RTX cards will also feature Tensor Cores for deep learning acceleration. It is unclear whether these Tensor Cores are unchanged from what we saw in Volta.

In addition to the RT Cores and Tensor Cores, Turing also features an all-new design for the traditional Streaming Multiprocessor (SM) units. Changes include an integer execution unit that runs in parallel with the floating-point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

NVIDIA claims these changes combined with the up to 4,608 available CUDA cores in the highest configuration will enable up to 16 TFLOPS and 16 trillion integer operations per second.
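
For context, that 16 TFLOPS figure lines up with the usual FP32 estimate of CUDA cores x 2 operations per clock (one fused multiply-add) x clock speed, which would imply a boost clock of roughly 1.74 GHz. That clock is our back-of-the-envelope inference from the claim, not a published specification:

```python
# Back-of-the-envelope check of the claimed 16 TFLOPS FP32 throughput.
# The ~1.74 GHz boost clock is inferred from the claim, not an NVIDIA spec.
cuda_cores = 4608
ops_per_clock = 2          # one fused multiply-add counts as 2 FLOPs
boost_clock_ghz = 1.74     # assumed clock that matches the claim

tflops = cuda_cores * ops_per_clock * boost_clock_ghz / 1000
print(f"Estimated FP32 throughput: {tflops:.1f} TFLOPS")   # ~16.0 TFLOPS
```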


Alongside the announcement of the Turing architecture, NVIDIA unveiled the Quadro RTX 5000, 6000, and 8000 products, due in Q4 2018.

In addition to the announcements at SIGGRAPH tonight, NVIDIA is expected to announce consumer GeForce products featuring the Turing architecture next week at an event in Germany.

PC Perspective is at SIGGRAPH and will also be at NVIDIA's event in Germany next week, so stay tuned for more details!

Source: NVIDIA

Coffee Lake S will be released along with the pumpkin spice

Subject: General Tech | August 13, 2018 - 01:49 PM |
Tagged: Intel, rumour, release, coffee lake s, i9-9900K, i5-9600K, i7-9700K

According to The Inquirer's various sources, the Coffee Lake refresh will launch on the first of October, in time to ensure system builders have models ready for the holidays.  These new processors do not offer a compelling upgrade for those with a modern system, as they are very similar to their predecessors.  If you have something a little older, however, the three new processors offer increased frequencies and core counts; the 9900K sports a default Boost Clock of 5GHz, which is nothing to sneeze at.


"If you were expecting anything bigger then allow us to disappoint you as, really the ninth-gen chips are mild upgrades on their predecessors, unless Intel has been keeping something very well hidden up its corporate sleeves."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

Intro and NNEF 1.0 Finalization

SIGGRAPH 2018 is a huge computer graphics expo that occurs in a seemingly random host city around North America. (Asia has a sister event, called SIGGRAPH Asia, which likewise shuffles around.) In the last twenty years, the North American SIGGRAPH seems to like Los Angeles, which hosted the event nine times over that period, but Vancouver won out this year. As you would expect, the maintainers of OpenGL and Vulkan are there, and they have a lot to talk about.

In summary:

  • NNEF 1.0 has been finalized and released!
  • The first public demo of OpenXR is available and on the show floor.
  • glTF Texture Transmission Extension is being discussed.
  • OpenCL Ecosystem Roadmap is being discussed.
  • Khronos Educators Program has launched.

I will go through each of these points. Feel free to skip around between the sections that interest you!

Read on to see NNEF or see page 2 for the rest!

Blender Benchmark / Blender Open Data Announced

Subject: General Tech | August 10, 2018 - 11:17 PM |
Tagged: Blender, benchmark

The Blender Foundation is wrapping up development on Blender 2.8, “The Workflow Update”. We have been following it for a while, but today’s announcement caught me by surprise: a benchmark database. It seems simple, right? Blender wants its users to know what hardware is best to use, especially when rendering images in Cycles (which can be damn slow).


A bit lopsided...

The solution is to make a version of Blender that creates and validates benchmarks, then compiles the data on their website. It's still early days for this, with just 2052 entries (at the time of writing), and the majority of those were from Linux boxes. Also, they only break it down into a handful of categories: Fastest CPU, Fastest Compute Device, Submissions Per OS, then a few charts that compare the individual benchmark scenes against one another in a hardware-agnostic fashion. They pledge to add a lot more metrics in the future.

Personally, I’m curious to see a performance-versus-OS metric. Some benchmarks back from 2016 (Blender 2.77 on an EVGA GTX 980 Ti) show Linux outperforming Windows 10 by over 2x, with Windows 7 landing in between (closer to Linux than Windows 10). At the time, it was attributed to NVIDIA’s CUDA driver being poorly optimized for the newer OS, which seems to be validated by the close showing of the GTX 1080 on Windows 10 and Linux, but I would like to see a compiled list of up-to-date results. Soon, I may be able to.