numa

  • non-uniform memory access (NUMA)



non-uniform memory access (NUMA)

On a non-uniform memory access (NUMA) enabled system, there is a memory node entry for each hardware NUMA node; on an SMP system there is a single memory node entry …

In a non-uniform memory access (NUMA) architecture, accessing a memory location in a "distant" node takes longer and can seriously affect performance. Understanding the underlying …

When using non-uniform memory access (NUMA), connections are affinitized to particular processors. Create a TDS endpoint for a TCP port, and restore access to the default endpoints (if …

On a system with non-uniform memory access (NUMA), memory allocation is NUMA-aware; that is, memory is allocated from the local node whenever possible. Huge page support …

Interrupt architecture for a non-uniform memory access (NUMA) data processing system   Gary D. Carpenter; Philippe L. De Backer; Mark E. Dean; David B. Glasco; Ronald L. …

Non-uniform memory access (NUMA) is a parallel model belonging to the DSM (distributed shared memory) class. In a NUMA architecture, each processor, with its own local memory and cache, …
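The snippets above describe NUMA-aware allocation: memory is taken from the node local to the requesting processor whenever possible. Below is a minimal, illustrative C sketch of that idea using Linux's user-space libnuma API (compile with -lnuma; the 4 MiB size is an arbitrary choice, and the program only demonstrates the calls rather than any tuned usage).

    /* Minimal libnuma sketch: allocate memory on the local NUMA node.
     * Build: gcc numa_local.c -o numa_local -lnuma
     * Requires the numactl/libnuma development package; illustrative only. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {          /* kernel or library lacks NUMA support */
            fprintf(stderr, "NUMA is not available on this system\n");
            return EXIT_FAILURE;
        }

        printf("Highest NUMA node number: %d\n", numa_max_node());

        size_t size = 4 * 1024 * 1024;       /* 4 MiB, chosen arbitrarily */
        void *buf = numa_alloc_local(size);  /* prefer pages on the caller's node */
        if (buf == NULL) {
            perror("numa_alloc_local");
            return EXIT_FAILURE;
        }

        memset(buf, 0, size);                /* touch the pages so they are actually placed */
        numa_free(buf, size);
        return EXIT_SUCCESS;
    }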

1
The main thing done in the relational database was to use "soft NUMA" and port mapping to get a good distribution of work within the system.
2
All traffic enters through a single port and is distributed on a round-robin basis to any available NUMA node.
3
To understand how pages of memory from the buffer cache are assigned when using NUMA, see Growing and Shrinking the Buffer Pool Under NUMA.
4
Systems with a large number of processors may find it advantageous to recompile against the NUMA user-land APIs added in RHEL4.
5
NUMA, like SMP, allows users to harness the combined power of multiple processors, with each processor accessing a common memory pool.
6
NUMA reduces the contention for a system's shared memory bus by having more memory buses and fewer processors on each bus.
7
Any operation running on a single NUMA node can only use buffer pages from that node.
8
The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio (a short worked example appears after these example sentences).
9
For high-end machines, new features target performance improvements, scalability, throughput, and NUMA support for SMP machines.
10
The number of CPUs within a NUMA node depends on the hardware vendor.
11
This provides automatic load balancing among the NUMA nodes.
12
On a mail-server benchmark, we show a 39% improvement in performance by automatically splitting the application among multiple NUMA domains.
13
XXI. To begin from Romulus: he left no children, and Numa Pompilius left none that could be of use to the republic.
14
Within a NUMA node, the connection is run on the least loaded scheduler on that node.
15
The NUMA architecture was designed to surpass the scalability limits of the SMP architecture.
16
Not just for SMP or NUMA, but for everything from a single-node UP system to a massively clustered system.
17
In NUMA systems, each processor is close to some parts of memory and farther from others.
18
In a NUMA architected system, CPUs are arranged in smaller sub-systems called pods.
19
The NUMA architecture can increase processor speed without increasing the load on the processor bus.
20
This topic describes how pages of memory from the buffer pool are assigned when using non-uniform memory access (NUMA).
21
I design and implement a fault-containment method and a fault-recovery algorithm, effectively solving the fault-handling problem in CC-NUMA computers.
22
NUMA architecture provides a scalable solution to this problem.
23
Because NUMA uses local and foreign memory, it will take longer to access some regions of memory than others.
24
All NUMA topics have been reorganized for this release.
25
Applications seeking additional performance gains can use user-land NUMA APIs.
26
Similarly, buffer pool pages are distributed across hardware NUMA nodes.
27
On NUMA hardware, some regions of memory are on physically different buses from other regions.
28
When using NUMA, the max server memory and min server memory values are divided evenly among NUMA nodes.
29
That means when users run out of capacity on their SMP servers, they can move their applications to NUMA servers with relative ease.
30
NUMA hardware is provided by the computer manufacturer.
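Example sentence 8 above defines the NUMA ratio as the cost of accessing foreign (remote) memory divided by the cost of accessing local memory. The short C sketch below illustrates the arithmetic; the latency figures are assumed example values, not measurements of any particular machine.

    /* NUMA ratio sketch: remote (foreign) access cost divided by local access cost.
     * The latencies below are assumed example values only. */
    #include <stdio.h>

    int main(void)
    {
        double local_ns  = 100.0;   /* assumed local-node access latency (ns)  */
        double remote_ns = 180.0;   /* assumed remote-node access latency (ns) */

        /* A NUMA ratio of 1.0 would mean remote access costs the same as local access. */
        printf("NUMA ratio = %.1f / %.1f = %.1f\n",
               remote_ns, local_ns, remote_ns / local_ns);
        return 0;
    }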