I.e., *define* templated symbols in each referencing compilation unit,
using discardable linkonce_odr linkage, analogously to C++.
This makes each compilation unit self-sufficient with respect to
templated symbols, which also means more inlining opportunities and less
need for LTO. There should be no more undefined-symbol issues caused by
buggy template culling.
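As a rough illustration (module and function names below are made up),
each compilation unit referencing a template instantiation now emits its
own discardable definition of it:

```d
module app;

// A plain function template; nothing here is specific to -linkonce-templates.
T twice(T)(T value) { return value + value; }

void main()
{
    // With -linkonce-templates, `twice!int` is *defined* in this compilation
    // unit with linkonce_odr linkage (as C++ compilers do for templates),
    // instead of being an undefined reference to a definition emitted into
    // some other object file.
    auto x = twice(21);
}
```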
The biggest advantage is that the optimizer can discard unused
linkonce_odr symbols early, instead of optimizing them and forwarding
them to the assembler. This makes the option especially useful with -O
for decreasing compilation times, which can, at least in some scenarios,
greatly outweigh the cost of the potentially much higher number of
symbols defined by the glue layer.
Libraries compiled with -linkonce-templates generally cannot be linked
against dependent code compiled without -linkonce-templates; the other
way around works.
This includes making sure `-cov=N ... -cov[=ctfe]` doesn't reset the
required percentage to 0.
Use a dummy *bool* option for a better help output (displaying `--cov`,
not `--cov=<value>`).
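A hypothetical invocation illustrating the fixed behavior (file name
invented):

```d
// Compile with, e.g.:
//   ldc2 -cov=90 -cov=ctfe app.d
// The required minimum coverage of 90% from `-cov=90` is retained; the
// later `-cov=ctfe` no longer resets it to 0.
module app;

void main() {}
```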
Instead, error out whenever this is requested by an expression-less
`synchronized` statement, including the source location to track it
down. This is safer, especially since the previous, one-time warning is
likely to be suppressed, and makes this host-agnostic.
Also suppress the previous warnings about an unknown `` (empty) or
`none` OS, treating these like `unknown`.
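For reference, a minimal sketch of the statement form in question:

```d
__gshared int counter;

void increment()
{
    // An expression-less `synchronized` statement requests an implicit
    // global mutex; on targets lacking support for that, this line now
    // triggers a compile error with its exact source location.
    synchronized
    {
        ++counter;
    }
}
```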
This is a cherry-pick from dlang/dmd#10752. Rainer found that the
compiler might crash with a segfault when aborting via exit() upon some
compile error, and that this seems to be related to GC worker threads
(so it's only an issue with recent host compilers). These threads are
spawned because some module ctors bypass `root/rmem.d` and use the GC
directly, e.g., to set up an associative array in `imphint.d`.
He came up with a nice and simple solution: make sure the GC starts in
disabled mode whenever it is initialized.
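A minimal sketch of the idea, using druntime's standard startup
configuration mechanism (the actual patch may differ):

```d
// Embedded in the compiler, this makes its GC start in disabled mode:
// allocations still work, but no collections - and hence no surprising
// GC worker-thread activity during exit() - happen unless the GC is
// explicitly enabled later, e.g., for -lowmem.
extern (C) __gshared string[] rt_options = ["gcopt=disable:1"];
```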
If the GC is enabled (via -lowmem), it must know about those Array
instances, so that the GC-allocated array of pointers and the referenced
GC-allocated strings are kept alive.
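A sketch of the general technique (not the actual patch): memory that
is not itself scanned by the GC but holds pointers to GC data can be
registered as a root range:

```d
import core.memory : GC;

// If `buffer` lives outside the GC-scanned heap but stores pointers to
// GC-allocated strings, register it so those strings stay reachable.
void keepAlive(const(void)[] buffer)
{
    GC.addRange(buffer.ptr, buffer.length);
}

// Unregister it again before the buffer itself is freed.
void release(const(void)[] buffer)
{
    GC.removeRange(buffer.ptr);
}
```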
Ignore the flag if the default triple (as opposed to the host's) already
has the desired bitness; it's thus only relevant if the default triple
doesn't match the host. I think that should render the LDC-specific
NO_ARCH_VARIANT in dmd-testsuite's Makefile obsolete (if set, d_do_test
won't add -m32/-m64 by default).
Also fix the config-file section lookup in case both -m32 and -m64 are
specified.
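A hypothetical sketch of the intended precedence, assuming the usual
last-flag-wins semantics for conflicting flags (names invented):

```d
enum Bitness { default_, m32, m64 }

// The last of -m32/-m64 wins and determines both the triple adjustment
// and the config-file section to look up.
Bitness effectiveBitness(const string[] args)
{
    auto result = Bitness.default_;
    foreach (arg; args)
    {
        if (arg == "-m32")
            result = Bitness.m32;
        else if (arg == "-m64")
            result = Bitness.m64;
    }
    return result;
}
```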
And use the wide API for pure is-env-variable-set checks too, as the
first call to a narrow environment API function would cause the C
runtime to prepare and maintain both narrow and wide environments.
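A minimal sketch of such a check via the wide API (Windows-only; the
function and variable names are invented):

```d
version (Windows)
{
    import core.sys.windows.winbase : GetEnvironmentVariableW, GetLastError;
    import core.sys.windows.winerror : ERROR_ENVVAR_NOT_FOUND;

    // Checks for existence without touching the CRT's narrow environment.
    bool isEnvVarSet(const(wchar)* name)
    {
        // Returns the required buffer size (> 0) if the variable exists,
        // or 0 with ERROR_ENVVAR_NOT_FOUND if it doesn't.
        return GetEnvironmentVariableW(name, null, 0) != 0
            || GetLastError() != ERROR_ENVVAR_NOT_FOUND;
    }
}
```

E.g., `isEnvVarSet("SOME_VAR"w.ptr)` works without the CRT ever
materializing the narrow environment.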
This makes _d_wrun_main (cherry-picked from dlang/druntime#2701) use the
provided args directly instead of the process's real arguments (on
Windows), if the host D compiler supports it.
This is required, e.g., when passing --DRT-* options from a response
file to _d_wrun_main.
As a major change, the encoding of the Windows cmdline arguments is
switched from the current codepage to UTF-8.
Note that the MinGW-based libs currently only provide narrow CRT entry
points.
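A simplified sketch of the involved glue (declarations roughly per
druntime's rt.dmain2; not compilable stand-alone, as `_Dmain` is
generated by the compiler):

```d
version (Windows)
{
    alias MainFunc = extern (C) int function(char[][] args);

    // druntime converts the UTF-16 args to UTF-8 and forwards them to D main.
    extern (C) int _d_wrun_main(int argc, wchar** wargv, MainFunc mainFunc);

    // The compiler-generated D main entry point.
    extern (C) int _Dmain(char[][] args);

    extern (C) int wmain(int argc, wchar** wargv)
    {
        // The provided wargv is used directly - the process's real cmdline
        // is not re-read - so --DRT-* options from a response file are
        // honored.
        return _d_wrun_main(argc, wargv, &_Dmain);
    }
}
```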
Only display the appropriate usage help (and then fail) if invoked
without any explicit cmdline options. Otherwise emit an error about
missing source files and fail immediately, without displaying the usage
help.
Besides making LDC and LDMD behave identically in this regard, it just
makes more sense IMO (when forgetting to specify a file, LDC previously
just printed the cmdline help without any error message).
It also makes `ldmd2 -transition=?` and `ldmd2 -preview=help` etc. print
the expected help without LDMD special cases.
This is a breaking change, conforming to new DMD semantics.
The previous semantics were inconsistent, as -{enable,disable}-asserts
and -boundscheck (as well as the new -{enable,disable}-switch-errors)
weren't overridden.