Recently, we got the opportunity to take a look at an interesting video capture device from a company called Pengo. While we had never heard of this company before, the promises of 4K 60Hz video capture at the price of $150 were too compelling to pass up.
The Pengo 4K is also a UVC capture device, meaning it uses the standard Microsoft video drivers: it will work with any application capable of seeing input from a webcam, and it requires no additional software or drivers. Pengo also claims macOS and Linux support for this device, although you would have to find software that knows how to deal with UVC devices.
From a design perspective, the Pengo 4K is quite simple. The device itself is made from aluminum and about the size of a deck of playing cards.
In addition to video capture, you can also use the Pengo as an audio input/output device through the audio connectors on the front.
Taking a look at the back of the Pengo, we can see my one major gripe with the device. Instead of using a proper port like Micro-USB or USB-C, the device ships with a Type-A to Type-A cable, which is actually against the USB specification and will make it difficult to find a replacement, or anything longer than the included cable (about 1 foot).
In this case, we used OBS to record footage from the Xbox One X using the Pengo 4K. Here, we can see that the Xbox is, in fact, capable of outputting full 4K 60Hz content to this capture card.
However, upon further investigation, we found that while the Pengo capture device ingests 4K footage, it is only capable of recording at 1080p 60Hz, meaning that it internally downsamples the footage.
While this design still makes sense to some degree, allowing you to keep your console or PC outputting 4K to your local display while gaming, it's disappointing to see the capture functionality limited to 1080p. To be fair, the recording limitations of the Pengo are hidden on the specifications page, but overall it seems disingenuous to market this device so heavily as "4K".
For anyone looking for an inexpensive, easy-to-use capture device, I would still recommend taking a look at the Pengo 4K HDMI Grabber. However, if you are looking for true 4K capture, this is not the device for you.
Subject: General Tech | August 16, 2018 - 03:16 PM | Alex Lustenberg
Tagged: xeon, video, Turing, Threadripper, ssd, Samsung, QLC, podcast, PA32UC, nvidia, nand, L1TF, Intel, DOOM Eternal, asus, amd, 660p, 2990wx, 2950x
PC Perspective Podcast #509 - 08/16/18
Join us this week for discussion on Modded Thinkpads, EVGA SuperNOVA PSUs, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano
Peanut Gallery: Ken Addison, Alex Lustenberg
Program length: 1:35:10
There is no 3
Week in Review:
News items of interest:
Picks of the Week:
Subject: General Tech | August 16, 2018 - 02:28 PM | Jeremy Hellstrom
Tagged: amd, threadripper 2, 2990wx, overclocking, LN2
The low-cost, workstation-class 2990WX has been verified running at 5.955GHz on an MSI MEG X399 Creation board, with the help of a lot of liquid nitrogen. The Inquirer has links to the setup that Indonesian overclocker Ivan Cupa needed in order to manage this feat, which required fans cooling certain portions of the motherboard as well. You are not likely to see this setup installed in a server room, but the achievement is no less impressive, as that is an incredible frequency to reach. Check it out in all its glory.
"So far, it would seem that AMD is on top when it comes to willy-waving, though it's worth noting that overclocked performance is a tad nebulous and real-world in-app performance is really where choosing an Intel or AMD chip comes to play."
Here is some more Tech News from around the web:
- TSMC sees pickup in orders for mining ASICs @ DigiTimes
- ARM takes aim at Intel with its laptop-class processor ambitions @ The Inquirer
- Foreshadow and Intel SGX software attestation: 'The whole trust model collapses' @ The Register
- Intel’s 10nm Cannon Lake chip gets another outing in new NUC mini PC @ Ars Technica
- IoT shouters Chirp get themselves added to Microsoft Azure IoT @ The Register
- What Are the Best CCleaner Alternatives? @ TechSpot
- Cougar Armor S Gaming Chair @ TechPowerUp
- NikKTech & 1MORE Feel The Sound European Giveaway
Subject: General Tech | August 15, 2018 - 03:45 PM | Jeremy Hellstrom
Tagged: gaming, jagged alliance rage
The first and second Jagged Alliance games, and to an extent the add-on to JA2, were incredible games for those who liked turn-based tactical shooters. From there it was all downhill, as the original idea was corrupted into Jagged Alliance Online and the Kickstarted Jagged Alliance: Flashback. The devs behind JA Online are now working on Jagged Alliance: Rage!, which puts you in control of some aged mercenaries susceptible to infections and permanent injuries. That mechanic is new and might indicate there is hope for the game yet, especially at the $20 price tag that has been chosen. Take a peek at the announcement video over at Rock, Paper, SHOTGUN.
"In Jagged Alliance: Rage! you are constantly on the brink of breakdown. Badly equipped and outnumbered, it’s up to the player to lead their seasoned mercenaries in tactical turn-based missions and to light the spark of a revolution."
Here is some more Tech News from around the web:
- Doom Eternal embraces its retro roots in tons of new QuakeCon footage @ Rock, Paper, SHOTGUN
- Monster Hunter: World Benchmark Performance Analysis @ TechPowerUp
- Frag for free as Quake Champions drops its initial entry fee @ Rock, Paper, SHOTGUN
- The hottest new board games from Gen Con 2018 @ Ars Technica
- Total War: Rome 2 gets prettied up, expanded and sprouts some family trees @ Rock, Paper, SHOTGUN
- How Many FPS Do You Need? @ TechSpot
- Fallout 76 turns the gaming trolls into targets @ HEXUS
- Wot I Think - Phantom Doctrine @ Rock, Paper, SHOTGUN
Subject: General Tech | August 15, 2018 - 02:42 PM | Jeremy Hellstrom
Tagged: SK Hynix, Terabyte, toshiba, QLC NAND
This year at the Flash Memory Summit, big is in, as Toshiba unveiled an 85TB 2.5" SSD and suggested a 20TB M.2 drive is not far off. SK Hynix will release a 64TB 2.5" SSD built on 1Tbit dies, which analysts expect to offer somewhat improved reads and writes compared to their previous offerings. The two companies will be using 96-layer QLC 3D NAND in these drives, and The Register expects we will see them use an NVMe interface as opposed to SATA. Check out the story for more detail on these drives as well as what Intel is working on.
"The Flash Memory Summit saw two landmark capacity announcements centred on 96-layer QLC (4bits/cell) flash that seemingly herald a coming virtual abolition of workstation and server read-intensive flash capacity constraints."
Here is some more Tech News from around the web:
- John McAfee lashes out at Bitfi 'hackers' @ The Inquirer
- A Community-Run ISP Is the Highest Rated Broadband Company In America @ Slashdot
- Intel finally emits Puma 1Gbps modem fixes – just as new ping-of-death bug emerges @ The Register
- Bitcoin Sinks Below $6,000 as Almost Everything Crypto Tumbles @ Slashdot
- Intel to launch X599 platform for its 28-core Skylake-X CPU @ The Inquirer
- The Ars Technica Back to School buying guide
- It's official: TLS 1.3 approved as standard while spies weep @ The Register
- Three more data-leaking security holes found in Intel chips as designers swap security for speed @ The Register
- An Early Look At The L1 Terminal Fault "L1TF" Performance Impact On Virtual Machines (Foreshadow) @ Phoronix
Subject: General Tech | August 15, 2018 - 10:59 AM | Ken Addison
Tagged: sysmark, sysmark 2018, bapco, benchmarks
SYSMark is an application-based benchmarking suite used by many PC OEMs and enterprises to evaluate hardware deployments, as well as by us here at PC Perspective to evaluate system performance.
By using a variety of widely used applications such as Microsoft Office and the Adobe Creative Suite, SYSMark can provide insight into the performance levels of typical user activities like Productivity, which can be difficult to quantify otherwise.
As part of the upgrade to SYSMark 2018, the applications used for testing have been updated as well, including Microsoft Office 2016, Google Chrome 65, Adobe Acrobat Pro DC, Adobe Photoshop CC (2018), CyberLink PowerDirector 15, Adobe Lightroom Classic CC, and AutoIT.
SYSMark 2018 is available today from BAPCo's online store.
Subject: General Tech | August 13, 2018 - 07:43 PM | Ken Addison
Tagged: turing, siggraph 2018, rtx, quadro rtx 8000, quadro rtx 6000, quadro rtx 5000, quadro, nvidia
Today at the professional graphics-focused SIGGRAPH conference, NVIDIA's Jen-Hsun Huang unveiled details on their much-rumored next GPU architecture, codenamed Turing.
At the core of the Turing architecture are what NVIDIA refers to as two "engines": one for accelerating ray tracing, and the other for accelerating AI inferencing.
The ray tracing units are called RT Cores and are not to be confused with the NVIDIA RTX technology for real-time ray tracing that we saw announced at GDC this year. There, NVIDIA was using their OptiX AI-powered denoising filter to clean up ray-traced images, allowing them to save on rendering resources, but the actual ray tracing was still being done on the GPU cores themselves.
Now, these RT Cores will perform the ray calculations themselves, at what NVIDIA claims is up to 10 GigaRays/second, or up to 25X the performance of the current Pascal architecture.
Just like we saw in the Volta-based Quadro GV100, these new Quadro RTX cards will also feature Tensor Cores for deep learning acceleration. It is unclear if these tensor cores remain unchanged from what we saw in Volta or not.
In addition to the RT Cores and Tensor Cores, Turing also features an all-new design for the traditional Streaming Multiprocessor (SM) GPU units. Changes include an integer execution unit that executes in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.
NVIDIA claims these changes combined with the up to 4,608 available CUDA cores in the highest configuration will enable up to 16 TFLOPS and 16 trillion integer operations per second.
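Those headline numbers can be sanity-checked with some quick arithmetic. Assuming the usual convention of two floating point operations per CUDA core per clock (one fused multiply-add), 16 TFLOPS across 4,608 cores implies a boost clock in the neighborhood of 1.74 GHz; similarly, 10 GigaRays/s at "up to 25X" Pascal implies Pascal manages roughly 0.4 GigaRays/s on its shader cores. A rough sketch:

```python
# Back-of-envelope check of NVIDIA's claimed Turing throughput figures.
# Assumes 2 FLOPs per CUDA core per clock (one fused multiply-add), the
# usual convention for quoting peak FP32 rates.

CUDA_CORES = 4608
PEAK_TFLOPS = 16.0
FLOPS_PER_CORE_PER_CLOCK = 2

# Implied boost clock needed to hit the quoted peak FP32 rate
implied_clock_ghz = (PEAK_TFLOPS * 1e12) / (CUDA_CORES * FLOPS_PER_CORE_PER_CLOCK) / 1e9
print(f"Implied clock: {implied_clock_ghz:.2f} GHz")  # ~1.74 GHz

# 10 GigaRays/s at "up to 25X" Pascal implies Pascal's shader-based rate
implied_pascal_gigarays = 10 / 25
print(f"Implied Pascal ray rate: {implied_pascal_gigarays:.1f} GigaRays/s")
```

That implied clock is in line with typical boost clocks for large NVIDIA GPUs, so the 16 TFLOPS figure is at least internally consistent.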
Alongside the announcement of the Turing Architecture, NVIDIA unveiled the Quadro RTX 5000, 6000, 8000-series products, due in Q4 2018.
In addition to the announcements at SIGGRAPH tonight, NVIDIA is expected to announce the consumer GeForce products featuring the Turing architecture next week at an event in Germany.
PC Perspective is at SIGGRAPH and will be at NVIDIA's event in Germany next week, so stay tuned for more details!
Subject: General Tech | August 13, 2018 - 01:49 PM | Jeremy Hellstrom
Tagged: Intel, rumour, release, coffee lake s, i9-9900K, i5-9600K, i7-9700K
According to the various sources The Inquirer has, the Coffee Lake refresh will be launched on the first of October, in time to ensure system builders have models ready for the holidays. The new processors do not offer a compelling upgrade for those with a modern system, as they are very similar to their predecessors. If you have something a little older, however, the three new processors offer increased frequencies and core counts; the 9900K sports a default boost clock of 5GHz, which is nothing to sneeze at.
"If you were expecting anything bigger then allow us to disappoint you as, really the ninth-gen chips are mild upgrades on their predecessors, unless Intel has been keeping something very well hidden up its corporate sleeves."
Here is some more Tech News from around the web:
- Intel hands first Optane DIMM to Google, where it'll collect dust until a supporting CPU arrives @The Register
- Android Pie is borking fast charging on some Pixel XL handsets @ The Inquirer
- Many Google Services on Android Devices and iPhones Store Location Data, Even if Location Sharing is Disabled From Privacy Settings @ Slashdot
- The off-brand 'military-grade' x86 processors, in the library, with the root-granting 'backdoor' @ The Register
- NETGEAR Orbi (RBK23) AC2200 Mesh Wi-Fi System @ Kitguru
- Reolink Argus 2 Wire-Free 1080p Security Camera Review @ NikKTech
Intro and NNEF 1.0 Finalization
SIGGRAPH 2018 is a huge computer graphics expo that occurs in a seemingly random host city around North America. (Asia has a sister event, called SIGGRAPH Asia, which likewise shuffles around.) In the last twenty years, the North American SIGGRAPH seems to like Los Angeles, which hosted the event nine times over that period, but Vancouver won out this year. As you would expect, the maintainers of OpenGL and Vulkan are there, and they have a lot to talk about.
- NNEF 1.0 has been finalized and released!
- The first public demo of OpenXR is available and on the show floor.
- glTF Texture Transmission Extension is being discussed.
- OpenCL Ecosystem Roadmap is being discussed.
- Khronos Educators Program has launched.
I will go through each of these points. Feel free to skip around between the sections that interest you!
Subject: General Tech | August 10, 2018 - 11:17 PM | Scott Michaud
Tagged: Blender, benchmark
The Blender Foundation is wrapping up development on Blender 2.8, “The Workflow Update”. We have been following it for a while, but today’s announcement caught me by surprise: a benchmark database. It seems simple, right? Blender wants its users to know what hardware is best to use, especially when rendering images in Cycles (which can be damn slow).
A bit lopsided...
The solution is to make a version of Blender that creates and validates benchmarks, then compiles the data on their website. It's still early days for this, with just 2,052 entries (at the time of writing), the majority of which came from Linux boxes. Also, they only break it down into a handful of categories: Fastest CPU, Fastest Compute Device, Submissions Per OS, then a few charts that compare the individual benchmark scenes against one another in a hardware-agnostic fashion. They pledge to add a lot more metrics in the future.
Personally, I’m curious to see a performance vs OS metric. Some benchmarks from back in 2016 (Blender 2.77 on an EVGA GTX 980 Ti) show Linux out-performing Windows 10 by over 2x, with Windows 7 landing in between (closer to Linux than Windows 10). At the time, it was attributed to NVIDIA’s CUDA driver being horribly optimized for the newer OS, which seems to be validated by the close showing of the GTX 1080 on Windows 10 and Linux, but I would like to see a compiled list of up-to-date results. Soon, I may be able to.