home:Ledest:erlang:19 / erlang / 1310-Fix-typos-in-erts-emulator-internal_doc.patch
File 1310-Fix-typos-in-erts-emulator-internal_doc.patch of Package erlang
From c2a320a122e3c73eb93a9ac05d2fce9d45702196 Mon Sep 17 00:00:00 2001
From: "Kian-Meng, Ang" <kianmeng@cpan.org>
Date: Wed, 24 Nov 2021 13:06:14 +0800
Subject: [PATCH] Fix typos in erts/emulator/internal_doc/

---
 erts/emulator/internal_doc/BeamAsm.md           |  2 +-
 erts/emulator/internal_doc/CodeLoading.md       |  2 +-
 erts/emulator/internal_doc/DelayedDealloc.md    |  4 ++--
 erts/emulator/internal_doc/GarbageCollection.md |  4 ++--
 erts/emulator/internal_doc/PTables.md           |  8 ++++----
 erts/emulator/internal_doc/PortSignals.md       |  4 ++--
 erts/emulator/internal_doc/SuperCarrier.md      | 14 +++++++-------
 erts/emulator/internal_doc/ThreadProgress.md    |  6 +++---
 erts/emulator/internal_doc/Tracing.md           |  4 ++--
 erts/emulator/internal_doc/beam_makeops.md      | 16 ++++++++--------
 erts/emulator/internal_doc/dec.erl              |  2 +-
 11 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/erts/emulator/internal_doc/CodeLoading.md b/erts/emulator/internal_doc/CodeLoading.md
index cfdc1bf30a..fa5bba0643 100644
--- a/erts/emulator/internal_doc/CodeLoading.md
+++ b/erts/emulator/internal_doc/CodeLoading.md
@@ -56,7 +56,7 @@ different modules and returns a "magic binary" containing the internal state of each prepared module. Function `finish_loading` could take a list of such states and do the finishing of all of them in one go.
-Currenlty we use the legacy BIF `erlang:load_module` which is now
+Currently we use the legacy BIF `erlang:load_module` which is now
 implemented in Erlang by calling the above two functions in sequence. Function `finish_loading` is limited to only accepts a list with one module state as we do not yet use the multi module loading
diff --git a/erts/emulator/internal_doc/DelayedDealloc.md b/erts/emulator/internal_doc/DelayedDealloc.md
index 4b7c774141..8a86a70b10 100644
--- a/erts/emulator/internal_doc/DelayedDealloc.md
+++ b/erts/emulator/internal_doc/DelayedDealloc.md
@@ -89,7 +89,7 @@ same location in memory. The head contains pointers to begining of the list (`head.first`), and to the first block which other threads may refer to
-(`head.unref_end`). Blocks between these pointers are only refered to
+(`head.unref_end`). Blocks between these pointers are only referred to
 by the head part of the data structure which is only used by the thread owning the allocator instance. When these two pointers are not equal the thread owning the allocator instance deallocate block after
@@ -137,7 +137,7 @@ If no new memory blocks are inserted into the list, it should eventually be emptied. All pointers to the list however expect to always point to something. This is solved by inserting an empty "marker" element, which only has to purpose of being there in the
-absense of other elements. That is when the list is empty it only
+absence of other elements. That is when the list is empty it only
 contains this "marker" element. ### Contention ###
diff --git a/erts/emulator/internal_doc/PTables.md b/erts/emulator/internal_doc/PTables.md
index ef61963a40..6b316eaa7e 100644
--- a/erts/emulator/internal_doc/PTables.md
+++ b/erts/emulator/internal_doc/PTables.md
@@ -113,7 +113,7 @@ the "thread progress" functionality in order to determine when it is safe to deallocate the process structure. We'll get back to this when describing deletion in the table.
-Using this new lookup approach we wont modify any memory at all which
+Using this new lookup approach we won't modify any memory at all which
 is important. A lookup conceptually only read memory, now this is true in the implementation also which is important from a scalability perspective. The previous implementation modified the cache line
@@ -282,7 +282,7 @@ single cache line containing the state of the rwlock even in the case we are only read locking. Instead of using such an rwlock, we have our own implementation of reader optimized rwlocks which keeps track of reader threads in separate thread specific cache lines. This in order
-to avoid contention on a singe cache line. As long as we only do read
+to avoid contention on a single cache line. As long as we only do read
 lock operations, threads only need to read a global cache line and modify its own cache line, and by this minimize communication between involved processors. The iterating BIFs are normally very infrequently
@@ -299,7 +299,7 @@ threads modify the table at the same time as we are trying to find the slot. The easy fix is to abort the operation if an empty slot could not be found in a finite number operation, and then restart the operation under a write lock. This will be implemented in next
-release, but furter work should be made trying to find a better
+release, but further work should be made trying to find a better
 solution. This and also previous implementation do not work well when the table
@@ -320,7 +320,7 @@ not require exclusive access to the table while reading a sequence of slots. In principle this should be rather easy, the code can handle sequences of variable sizes, so shrinking the sequence size of slots to one would solv the problem. This will, however, need some tweeks
-and modifications of not trival code, but is something that should be
+and modifications of not trivial code, but is something that should be
 looked at in the future. By increasing the size of identifiers, at least on 64-bit machines
diff --git a/erts/emulator/internal_doc/PortSignals.md b/erts/emulator/internal_doc/PortSignals.md
index 8782ae4e17..f2490152ca 100644
--- a/erts/emulator/internal_doc/PortSignals.md
+++ b/erts/emulator/internal_doc/PortSignals.md
@@ -108,7 +108,7 @@ and a private, lock free, queue like, task data structure. This "semi locked" approach is similar to how the message boxes of processes are managed. The lock is port specific and only used for protection of port tasks, so the run queue lock is now needed in more or less the
-same way for ports as for processes. This ensures that we wont see an
+same way for ports as for processes. This ensures that we won't see an
 increased lock contention on run queue locks due to this rewrite of the port functionality.
@@ -211,7 +211,7 @@ consuming, and did not really depend on the port. That is we would like to do this without having the port lock locked. In order to improve this, state information was re-organized in the
-port structer, so that we can access it using atomic memory
+port structure, so that we can access it using atomic memory
 operations. This together with the new port table implementation, enabled us to lookup the port and inspect the state before acquiring the port lock, which in turn made it possible to perform preparations
diff --git a/erts/emulator/internal_doc/SuperCarrier.md b/erts/emulator/internal_doc/SuperCarrier.md
index f52c6613d5..55ac5a67af 100644
--- a/erts/emulator/internal_doc/SuperCarrier.md
+++ b/erts/emulator/internal_doc/SuperCarrier.md
@@ -12,7 +12,7 @@ Problem ------- The initial motivation for this feature was customers asking for a way
-to pre-allocate physcial memory at VM start for it to use.
+to pre-allocate physical memory at VM start for it to use.
 Other problems were different experienced limitations of the OS implementation of mmap:
@@ -29,7 +29,7 @@ fragmentation increased. Solution --------
-Allocate one large continious area of address space at VM start and
+Allocate one large continuous area of address space at VM start and
 then use that area to satisfy our dynamic memory need during runtime. In other words: implement our own mmap.
@@ -70,7 +70,7 @@ name suggest that it can be viewed as our own mmap implementation. A super carrier needs to satisfy two slightly different kinds of allocation requests; multi block carriers (MBC) and single block
-carriers (SBC). They are both rather large blocks of continious
+carriers (SBC). They are both rather large blocks of continuous
 memory, but MBCs and SBCs have different demands on alignment and size.
@@ -79,13 +79,13 @@ alignment. MBCs are more restricted. They can only have a number of fixed sizes that are powers of 2. The start address need to have a very
-large aligment (currently 256 kb, called "super alignment"). This is a
+large alignment (currently 256 kb, called "super alignment"). This is a
 design choice that allows very low overhead per allocated block in the MBC. To reduce fragmentation within the super carrier, it is good to keep SBCs and MBCs apart. MBCs with their uniform alignment and sizes can be
-packed very efficiently together. SBCs without demand for aligment can
+packed very efficiently together. SBCs without demand for alignment can
 also be allocated quite efficiently together. But mixing them can lead to a lot of memory wasted when we need to create large holes of padding to the next alignment limit.
@@ -102,9 +102,9 @@ The MBC area is called *sa* as in super aligned and the SBC area is called **sua** as in super un-aligned. Note that the "super" in super alignment and the "super" in super
-carrier has nothing to do with each other. We could have choosen
+carrier has nothing to do with each other. We could have chosen
 another naming to avoid confusion, such as "meta" carrier or "giant"
-aligment.
+alignment.
 +-------+ <---- sua.top | sua |
diff --git a/erts/emulator/internal_doc/ThreadProgress.md b/erts/emulator/internal_doc/ThreadProgress.md
index 03a802f904..a48b250104 100644
--- a/erts/emulator/internal_doc/ThreadProgress.md
+++ b/erts/emulator/internal_doc/ThreadProgress.md
@@ -78,7 +78,7 @@ thread progress operation has been initiated, and at least once ordered using communication via memory which makes it possible to draw conclusion about the memory state after the thread progress operation has completed. Lets call the progress made from initiation to
-comletion for "thread progress".
+completion for "thread progress".
 Assuming that the thread progress functionality is efficient, a lot of algorithms can both be simplified and made more efficient than using
@@ -120,7 +120,7 @@ communication. We also want threads to be able to determine when thread progress has been made relatively fast. That is we need to have some balance
-between comunication overhead and time to complete the operation.
+between communication overhead and time to complete the operation.
 ### API ###
@@ -222,7 +222,7 @@ current global value plus one at the time when we call confirmed global value plus two at this time. The above described implementation more or less minimizes the
-comunication needed before we can increment the global counter. The
+communication needed before we can increment the global counter. The
 amount of communication in the system due to the thread progress functionality however also depend on the frequency with which managed threads call `erts_thr_progress_update()`. Today each scheduler thread
diff --git a/erts/emulator/internal_doc/Tracing.md b/erts/emulator/internal_doc/Tracing.md
index d81739c7cb..f0182daad8 100644
--- a/erts/emulator/internal_doc/Tracing.md
+++ b/erts/emulator/internal_doc/Tracing.md
@@ -106,7 +106,7 @@ instantaneously without the need of external function calls. The choosen solution is instead for tracing to use the technique of replication applied on the data structures for breakpoints. Two
-generations of breakpoints are kept and indentified by index of 0 and
+generations of breakpoints are kept and identified by index of 0 and
 1. The global atomic variables `erts_active_bp_index` will determine which generation of breakpoints running code will use.
@@ -236,7 +236,7 @@ value of `erts_active_bp_index` at different times as it is read without any memory barrier. But this is the best we can do without more expensive thread synchronization.
-The waiting in step 8 is to make sure we dont't restore the original
+The waiting in step 8 is to make sure we don't restore the original
 bream instructions for disabled breakpoints until we know that no thread is still accessing the old enabled part of a disabled breakpoint.
diff --git a/erts/emulator/internal_doc/dec.erl b/erts/emulator/internal_doc/dec.erl
index 8ce83fa402..52ab42ebc0 100644
--- a/erts/emulator/internal_doc/dec.erl
+++ b/erts/emulator/internal_doc/dec.erl
@@ -24,7 +24,7 @@ %% The C header is generated from a text file containing tuples in the %% following format: %% {RevList,Translation}
-%% Where 'RevList' is a reversed list of the denormalized repressentation of
+%% Where 'RevList' is a reversed list of the denormalized representation of
 %% the character 'Translation'. An example would be the swedish character %% 'ö', which would be represented in the file as: %% {[776,111],246}, as the denormalized representation of codepoint 246
--
2.31.1
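A typo patch like this one is usually checked locally before being added to a package. A minimal sketch of that workflow with `patch(1)`, using made-up stand-in files rather than the real Erlang/OTP tree:

```shell
# Sketch: verify a typo patch applies cleanly, then apply it.
# CodeLoading.md and fix-typo.patch here are hypothetical stand-ins.
workdir=$(mktemp -d)
cd "$workdir"

printf 'Currenlty we use the legacy BIF\n' > CodeLoading.md

cat > fix-typo.patch <<'EOF'
--- a/CodeLoading.md
+++ b/CodeLoading.md
@@ -1 +1 @@
-Currenlty we use the legacy BIF
+Currently we use the legacy BIF
EOF

# --dry-run reports whether the patch would apply, without touching files
patch -p1 --dry-run < fix-typo.patch

# apply for real, then confirm the typo is gone
patch -p1 < fix-typo.patch
grep -q 'Currently we use' CodeLoading.md && echo 'patch applied'
```

The `-p1` flag strips the leading `a/` and `b/` path components, matching the way git formats patches such as the one above.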