The dtors can be checked at compile time; insert a runtime check for the
monitor before finalizing the stack-allocated class object via a druntime
call.
See issue #2515.
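A minimal D sketch of the scenario (illustrative only; the class and function
names are made up): whether a dtor must run is known at compile time, but
whether the object's monitor was ever allocated is only known at runtime.

    class Resource
    {
        ~this() {} // known at compile time that finalization must run a dtor
    }

    void use()
    {
        scope r = new Resource(); // stack-allocated class object
        synchronized (r)          // lazily allocates the object's monitor
        {
            // ...
        }
    } // finalized in place via a druntime call; whether a monitor exists is
      // only known at runtime, hence the runtime check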
Improve robustness for TypeInfos of speculative types by only eliding
their TypeInfo definition, not the declaration of the LL global
altogether.
`DtoTypeInfoOf()` expects the LL global to be created and otherwise
fails with an assertion or segfault (e.g., issue #2357). So now a missing
TypeInfo definition should only lead to linker errors.
Also normalize the calls to `DtoTypeInfoOf()` and revise the subsequent
pointer bitcasts, as the LL type of forward-declared TypeInfo globals
may be opaque.
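A rough, hypothetical illustration of a speculative type (instantiated only
from a gagged context such as `is(typeof(...))`) whose TypeInfo may still end
up being referenced:

    struct Box(T) { T value; }

    // Box!float is instantiated only speculatively; the TypeInfo global it
    // references must at least remain declared, even if its definition is
    // elided.
    enum hasTypeInfo = is(typeof(typeid(Box!float)));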
I don't know how much sense that makes, as the front-end expects the result
expression to be a single bool, while LLVM returns a vector of i1 values,
the pair-wise results.
From my experience with SIMD on x86_64, what's mostly needed is a vector
bit mask, as that's what the CPU returns and what is later used to mask
accesses/writes.
Anyway, due to the new `Target.isVectorOpSupported()` simply allowing all
ops, (not-)equality comparisons of vectors now land here, and I reduce the
pair-wise results via an integer bitcast and an additional integer
comparison.
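A small D example of such a comparison (assuming an x86_64 target where
core.simd.int4 is available; the function name is made up):

    import core.simd;

    bool lanesEqual(int4 a, int4 b)
    {
        // The front-end expects a single bool here; LLVM's icmp yields a
        // <4 x i1>, which is reduced via an integer bitcast plus one more
        // integer comparison.
        return a == b;
    }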
To trigger this, you had to construct or destruct an aggregate at the null
memory location, which would crash as soon as any member was accessed, and
accessing members is required for a constructor or destructor to do
anything useful.
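A contrived sketch of such a trigger (using `destroy` as one plausible way
to destruct at null):

    struct S
    {
        int x;
        ~this() { x = 0; } // touches a member, as any useful dtor does
    }

    void main()
    {
        S* p = null;
        destroy(*p); // destructs the aggregate at address null and crashes
                     // as soon as the member is accessed
    }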
The parts needing the most attention were ddmd.root.ctfloat, ddmd.target
(incl. gen/target.cpp) and ddmd.builtin. The front-end is now prepared for
elaborate compile-time floating-point types to allow for proper cross-
compilation.
This version still uses the host's `real` type for compile-time reals,
except for MSVC hosts, which still use 64-bit doubles (even when compiled
with a DMD host compiler).
Some other changes:
* semantic*() of Statements extracted from statement.d to statementsem.d
* mangle() -> mangleToBuffer()
* Identifier::string -> toChars()
* Token::float80value -> floatvalue
* Dsymbol::isAggregateMember() -> isMember()
* BoolExp is no more
* ddmd.root.ctfloat: LDC-specific CTFE builtins
Fixes issue #1822.
DtoResolveFunction() explicitly excludes body-less abstract functions from
being automatically declared. If we want to emit linking errors like DMD
does, we need an LL function for the call (we previously didn't declare one
for abstract functions, so it was null and LDC crashed). Thus make sure the
function is resolved and declared by invoking DtoDeclareFunction().
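A hypothetical repro in the spirit of the fix (the actual test case is in
issue #1822): a direct, non-virtual call to a body-less abstract function,
which DMD compiles and leaves to the linker.

    abstract class A
    {
        abstract void foo(); // body-less abstract function
    }

    class B : A
    {
        override void foo()
        {
            super.foo(); // direct call to the body-less function: needs a
                         // declared LL function; previously null -> crash
        }
    }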
The front-end fills in missing init expressions (except for the nested
context pointer in most cases), so there are no implicitly initialized
fields for dynamically initialized struct literals.
The front-end is also supposed to make sure there are no overlapping
initializers by using null-expressions for overridden fields, but doesn't
in some cases (DMD issue 16471).
Instead of preferring the first initializer in lexical field order, now use
the lexically last one to mimic DMD.
The previous code iterated over the fields in lexical order and ignored a
field's initializer if there were earlier-declared fields with a greater
offset (and an initializer expression), which is wrong for anonymous
structs inside a union:
    union {
        struct { int i1; long l = 123; }
        struct { int i2; int x = 1; }
    }
`x` was previously initialized with 0 (treated as padding).
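A runnable version of the example with the expected values per the
description above (the `l == 123` assertion is an assumption, based on `l`
not overlapping the second struct's fields):

    struct S
    {
        union
        {
            struct { int i1; long l = 123; }
            struct { int i2; int x = 1; }
        }
    }

    void main()
    {
        S s;
        assert(s.x == 1);   // previously 0: the initializer was dropped
        assert(s.l == 123); // does not overlap the second struct's fields
    }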
This was hit when compiling ddmd/lexer.d:1189, where a `const(T)*` is
implicitly cast to a `const(T*)`.
The DLValue ctor makes sure the LL pointer and the D type are compatible.
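For illustration, the kind of conversion involved (a hypothetical snippet,
not the actual lexer.d code):

    void f(const(int)* p)
    {
        // const(int)* (mutable pointer to const int) implicitly converts to
        // const(int*) (const pointer to const int): same pointer value,
        // different D type.
        const(int*) q = p;
    }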
Previously, the transitory state that is only needed and valid during
generation of the LLVM IR for the function body was conflated with the
general codegen metadata for the function declaration in IrFunction.
There is further potential for cleanup regarding the use of gIR->func()
and so on all over the code base, but that is out of scope for this commit,
which is only concerned with the IrFunction members moved to FuncGenState.
GitHub: Fixes #1661.
We only needed that findLvalueExp() thing to 1. skip over casts and 2. use
a (bin)assign's lhs instead of the (bin)assign's result.
Well, by making sure most (bin)assign expressions actually return the lhs
lvalue, we don't have this problem anymore and only need to skip over casts
to get to the nested lvalue.
This gets rid of a hacky workaround and brings our AST traversal order
back to normal again.
Just cast it to the full lhs type, except for shift-assignments (always
perform the shift op in the lvalue precision).
The front-end likes to cast the lhs to the type it wants for the
intermediate binop. E.g., incrementing a ubyte lvalue by an integer leads
to `lval += 3` being rewritten to `cast(int)lval += 3`.
Now if we used the full lhs expression for the binop, we'd get an rvalue
result due to the cast, i.e., `cast(int)lval` would be evaluated before the
rhs. So what we actually want is to load the lhs lvalue `lval` AFTER
evaluating the rhs and cast it to an integer so that the binop is performed
in that precision. The result is cast back to the lvalue type when assigning.
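A minimal D illustration of that rewrite (function name made up):

    void bump(ref ubyte lval)
    {
        // Rewritten by the front-end to `cast(int)lval += 3`: the rhs is
        // evaluated first, then `lval` is loaded, widened to int for the
        // addition, and the result is cast back to ubyte on assignment.
        lval += 3;
    }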
For binassign expressions, return the lhs lvalue directly and don't bother
casting it to the expression's type; the base types apparently always match.