Post by daax on Nov 5, 2021 17:00:38 GMT
Hi there, I've been using your tool for quite some time to look for discrepancies between systems. I was curious, though, what you are using to determine the RAM CAS Latency, RAS to CAS Delay, and tRAS. I've done systems programming for a while and have used the PMCs on Intel before to acquire similar information, such as the UNC_DRAM_READ_CAS.CHn counter(s). Are these measurements being taken using the PMCs? Is this a direct command to the IMC?
I dug through the memory device information in the SMBIOS but wasn't able to discern anything related to tCL/tRAS. I had also looked at the MCH specification for my particular family/generation of processor and had only come across the MCHBAR registers for DDR timing (notably, MCHBAR_CH0_CR_TC_ODT_0_0_0_MCHBAR). I understand this is not an open-source tool, but I figured it wouldn't hurt to ask how this one feature was implemented. I'll probably just use HalGetBusDataByOffset and the offset specified in the documentation. If there's a cleaner way to acquire this information, please do correct me. And of course, thank you for the time and effort you've put into this tool; I've made sure to donate as well, as I greatly enjoy using it.
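For anyone else following along, this is roughly what I had in mind: a minimal kernel-mode sketch that reads one DWORD from PCI config space with HalGetBusDataByOffset(). The bus/device/function and the register offset are placeholders that would come from the MCH datasheet, not anything I know about how SIV does it.

  #include <ntddk.h>

  // Minimal sketch: read one 32-bit value from PCI configuration space using
  // HalGetBusDataByOffset(). The bus/device/function and the offset are
  // placeholders - the real values come from the MCH datasheet.
  static NTSTATUS ReadPciConfigDword( ULONG bus, ULONG dev, ULONG fun, ULONG off, ULONG *val )
  {
    PCI_SLOT_NUMBER slot;
    ULONG           got;

    slot.u.AsULONG             = 0;
    slot.u.bits.DeviceNumber   = dev;
    slot.u.bits.FunctionNumber = fun;

    got = HalGetBusDataByOffset( PCIConfiguration,      // PCI config space
                                 bus,                   // bus number
                                 slot.u.AsULONG,        // device/function
                                 val,                   // caller's buffer
                                 off,                   // byte offset within config space
                                 sizeof( *val ) );      // read one DWORD

    return ( got == sizeof( *val ) ) ? STATUS_SUCCESS : STATUS_UNSUCCESSFUL;
  }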
Post by siv on Nov 7, 2021 17:44:23 GMT
Welcome to the forum. It's more complicated than I suspect you think it is, and what is needed depends on the chipset. For systems without MCHBAR, HalGetBusDataByOffset() usually gets used, but some chipsets hide devices from HalGetBusDataByOffset(), so you have to read the PCI config space directly. When accessing PCIe extended config space, depending on the version of Windows, HalGetBusDataByOffset() will not work for offsets beyond the first 256 bytes, so you have to use MmMapIoSpace() to access these via PCIEXBAR, which is also how you access MCHBAR. As MmMapIoSpace() + MmUnmapIoSpace() can cause high DPC latencies, you also really need a mappings cache, see Menu->Hardware->CPU Detail->Map Unmap.

Reading the AMD Ryzen memory timings is different; to know how to do this you need access to the AMD NDA datasheets, and this is also the situation for some other chipsets/IMCs. I have never come across an SMBIOS that reports memory timings, and AFAIK the DSP0134 specification does not define a way to do this.
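Very roughly, the MCHBAR path looks like the sketch below. The MCHBAR physical base, the window size and the register offset are placeholders (the base gets read from the host bridge config space and the timing offsets come from the datasheet), and a real implementation keeps the mapping in a cache rather than mapping/unmapping on every read, for the DPC latency reason above.

  #include <ntddk.h>

  // Rough sketch of the MCHBAR path (kernel mode). The physical base, window
  // size and register offset are placeholders; a real implementation caches
  // the mapping because repeated MmMapIoSpace()/MmUnmapIoSpace() causes DPC latency.
  static NTSTATUS ReadMchbarDword( PHYSICAL_ADDRESS mchbarPhys, ULONG offset, ULONG *value )
  {
    UCHAR *base;

    base = (UCHAR *) MmMapIoSpace( mchbarPhys,          // MCHBAR physical base
                                   0x8000,              // window size (placeholder)
                                   MmNonCached );       // device memory, uncached
    if( base == NULL )
      return STATUS_INSUFFICIENT_RESOURCES;

    *value = READ_REGISTER_ULONG( (ULONG *)( base + offset ) ); // e.g. a TC_* timing register

    MmUnmapIoSpace( base, 0x8000 );                     // a mappings cache would keep this mapped
    return STATUS_SUCCESS;
  }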
Post by daax on Nov 8, 2021 17:39:40 GMT
I had assumed it was behind NDA once I was digging through the manuals / datasheets and had no luck. I got hold of the NDA PPR that describes accessing the SMN registers. It's a bit of a pain that they did that, but oh well. It seems the CCX/NBIO registers and the FastRegs BAR for MMIO SMN registers are in the Vol. 2 PPR. Only took me two days to find them in my stack... I was trying to write a nice interface to access the controllers regardless of platform, but AMD always likes to throw a wrench in my plans. Regardless, I was able to get it working fine on Intel machines. Thanks for the information, it's much appreciated!
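In case it helps anyone else, the non-NDA part of this is at least visible in the public open-source drivers: on Zen parts they reach SMN through an index/data register pair in the root complex config space (the SMN address goes to offset 0x60 and the data is read from offset 0x64 on bus 0, device 0, function 0). Below is a rough, unsynchronised kernel-mode sketch along those lines; the offsets are the ones those drivers use and may not apply to every family.

  // Hedged sketch of the SMN index/data access used by public open-source
  // drivers on Zen parts: write the SMN address to config offset 0x60 of
  // bus 0 / device 0 / function 0, then read the data back from offset 0x64.
  // No locking is shown here.
  static ULONG ReadSmnDword( ULONG smnAddress )
  {
    PCI_SLOT_NUMBER slot;
    ULONG           data = 0;

    slot.u.AsULONG             = 0;                     // bus 0, device 0, function 0
    slot.u.bits.DeviceNumber   = 0;
    slot.u.bits.FunctionNumber = 0;

    HalSetBusDataByOffset( PCIConfiguration, 0, slot.u.AsULONG,
                           &smnAddress, 0x60, sizeof( smnAddress ) ); // SMN index register
    HalGetBusDataByOffset( PCIConfiguration, 0, slot.u.AsULONG,
                           &data,       0x64, sizeof( data ) );       // SMN data register

    return data;
  }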
Post by siv on Nov 8, 2021 18:05:35 GMT
"but AMD always likes to throw a wrench in my plans."

I am happy to hear you made some progress, but it's not just AMD who do this; some Intel Atom CPUs do similar, as does VIA Centaur. I feel I should mention locking: all of AIDA64 + CPUZ + HWiNFO + SIV + ... use the Global\Access_PCI mutex to interlock access to PCI config space, and you should also use this. It's all easy enough, but several utilities created it with the incorrect protection, so I sent their authors the following example of how to correctly create all the mutexes we use, see Menu->Help->Lock Handle.

  HANDLE CreateWorldMutex( CONST TCHAR *nam )                          // Create/Open a Mutex with
  {                                                                    //  appropriate protection
    HANDLE                    mhl;                                     // Mutex Handle
    SID                      *sid;                                     // Security ID
    SECURITY_ATTRIBUTES       sab[ 1 ];                                // Security Attributes Block
    SECURITY_DESCRIPTOR       sdb[ 1 ];                                // Security Descriptor Block
    ACL                       acl[ 32 ];                               // ACL Area
    SID_IDENTIFIER_AUTHORITY  swa[ 1 ] = SECURITY_WORLD_SID_AUTHORITY; // World access
    TCHAR                     gtb[ 256 ];                              // Global\ text buffer

    InitializeSecurityDescriptor( sdb, SECURITY_DESCRIPTOR_REVISION ); // setup Security Descriptor

    if( ( sid = NULL,                                                  // in case AllocateAndInitializeSid fails
          AllocateAndInitializeSid( swa,                               // SID Identifier Authority
                                    1,                                 // Sub Authority count
                                    SECURITY_WORLD_RID,                // Sub Authority 0
                                    0,                                 // Sub Authority 1
                                    0,                                 // Sub Authority 2
                                    0,                                 // Sub Authority 3
                                    0,                                 // Sub Authority 4
                                    0,                                 // Sub Authority 5
                                    0,                                 // Sub Authority 6
                                    0,                                 // Sub Authority 7
                                    &sid ) ) &&                        // returned SID
        ( InitializeAcl( acl,                                          // ACL setup OK and
                         sizeof( acl ),                                //
                         ACL_REVISION ) ) &&                           //
        ( AddAccessAllowedAce( acl,                                    // ACE setup OK and
                               ACL_REVISION,                           //
                               MUTANT_ALL_ACCESS,                      // Access Rights Mask
                               sid ) ) )                               //
      SetSecurityDescriptorDacl( sdb, TRUE, acl,  FALSE );             // yes, setup world access
    else                                                               //
      SetSecurityDescriptorDacl( sdb, TRUE, NULL, FALSE );             // no,  setup with default

    sab->nLength              = sizeof( sab[ 0 ] );                    // setup Security Attributes Block
    sab->bInheritHandle       = FALSE;                                 //
    sab->lpSecurityDescriptor = sdb;                                   //

    _stprintf( gtb, TEXT( "Global\\%s" ), nam );                       // name with Global\ prefix

    if( ( ( mhl = CreateMutex( sab,                                    // Create/Open with Global\ Unprotected or
                               FALSE,                                  //
                               gtb ) ) != NULL ) ||                    //
        ( ( mhl = OpenMutex( READ_CONTROL | MUTANT_QUERY_STATE | SYNCHRONIZE, // Open with Global\ Protected or (probably Aquasuite)
                             FALSE,                                    //
                             gtb ) ) != NULL ) ||                      //
        ( ( mhl = CreateMutex( sab,                                    // Create/Open with no prefix Unprotected ?
                               FALSE,                                  //
                               nam ) ) != NULL ) ) {}                  //

    if( sid )                                                          // need to free the SID ?
      FreeSid( sid );                                                  // yes, free it

    return mhl;                                                        // return the handle
  }
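The usual pattern around an actual access then looks something like the sketch below; DoPciConfigRead() is just a placeholder for whatever access method gets used.

  // Usage sketch: serialise PCI config space access with the shared mutex
  // created above. DoPciConfigRead() is a placeholder for the real access.
  HANDLE pci = CreateWorldMutex( TEXT( "Access_PCI" ) );              // -> Global\Access_PCI

  if( pci && ( WaitForSingleObject( pci, 5000 ) == WAIT_OBJECT_0 ) )  // wait up to 5 seconds
  {
    DoPciConfigRead();                                                // placeholder for the real access
    ReleaseMutex( pci );                                              // let the other utilities run
  }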
Post by daax on Nov 12, 2021 21:41:34 GMT
Ah! Excellent. Thank you for bringing that to my attention. I'll be sure to use this.