A command like "make -j 2 check-gcc-c check-gcc-c++" run in the top level of
a fresh build directory does not work reliably. That will spawn two
independent make processes inside the "gcc" directory, and each of those
will attempt to create site.exp if it doesn't exist, and the two will interfere
with each other, often producing a corrupted or empty site.exp. Resolve that by
making these targets depend on a new phony target that ensures site.exp is
created before the recursive makes start.
ChangeLog:
* Makefile.in: Regenerate.
* Makefile.tpl: Add dependency on site.exp to check-gcc-* targets.
The various decls relating to rich_location are in
libcpp/include/line-map.h, but they don't relate to line maps.
Split them out to their own header: libcpp/include/rich-location.h
No functional change intended.
gcc/ChangeLog:
* Makefile.in (CPPLIB_H): Add libcpp/include/rich-location.h.
* coretypes.h (class rich_location): New forward decl.
gcc/analyzer/ChangeLog:
* analyzer.h: Include "rich-location.h".
gcc/c-family/ChangeLog:
* c-lex.cc: Include "rich-location.h".
gcc/cp/ChangeLog:
* mapper-client.cc: Include "rich-location.h".
gcc/ChangeLog:
* diagnostic.h: Include "rich-location.h".
* edit-context.h (class fixit_hint): New forward decl.
* gcc-rich-location.h: Include "rich-location.h".
* genmatch.cc: Likewise.
* pretty-print.h: Likewise.
gcc/rust/ChangeLog:
* rust-location.h: Include "rich-location.h".
libcpp/ChangeLog:
* Makefile.in (TAGS_SOURCES): Add "include/rich-location.h".
* include/cpplib.h (class rich_location): New forward decl.
* include/line-map.h (class range_label)
(enum range_display_kind, struct location_range)
(class semi_embedded_vec, class rich_location, class label_text)
(class range_label, class fixit_hint): Move to...
* include/rich-location.h: ...this new file.
* internal.h: Include "rich-location.h".
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This patch:
- adds support to the analyzer for tracking API-private state for which
  we don't have a decl (such as strtok's internal state), and
- uses it to implement a new -Wanalyzer-undefined-behavior-strtok warning,
  which fires when strtok (NULL, delim) is the first call to strtok
  after main has started (see the sketch below).
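As a sketch (not one of the new tests), the kind of call the warning targets:

  #include <cstring>

  char *
  get_first_token (char *delim)
  {
    /* Undefined behavior: this is the first call to strtok since main
       started, so there is no prior string for it to continue tokenizing;
       -Wanalyzer-undefined-behavior-strtok warns here.  */
    return std::strtok (nullptr, delim);
  }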
gcc/analyzer/ChangeLog:
PR analyzer/107573
* analyzer.h (register_known_functions): Add region_model_manager
param.
* analyzer.opt (Wanalyzer-undefined-behavior-strtok): New.
* call-summary.cc
(call_summary_replay::convert_region_from_summary_1): Handle
RK_PRIVATE.
* engine.cc (impl_run_checkers): Pass model manager to
register_known_functions.
* kf.cc (class undefined_function_behavior): New.
(class kf_strtok): New.
(register_known_functions): Add region_model_manager param.
Use it to register "strtok".
* region-model-manager.cc
(region_model_manager::get_or_create_conjured_svalue): Add "idx"
param.
* region-model-manager.h
(region_model_manager::get_or_create_conjured_svalue): Add "idx"
param.
(region_model_manager::get_root_region): New accessor.
* region-model.cc (region_model::scan_for_null_terminator): Handle
"expr" being null.
(region_model::get_representative_path_var_1): Handle RK_PRIVATE.
* region-model.h (region_model::called_from_main_p): Make public.
* region.cc (region::get_memory_space): Handle RK_PRIVATE.
(region::can_have_initial_svalue_p): Handle MEMSPACE_PRIVATE.
(private_region::dump_to_pp): New.
* region.h (MEMSPACE_PRIVATE): New.
(RK_PRIVATE): New.
(class private_region): New.
(is_a_helper <const private_region *>::test): New.
* store.cc (store::replay_call_summary_cluster): Handle
RK_PRIVATE.
* svalue.h (struct conjured_svalue::key_t): Add "idx" param to
ctor and "m_idx" field.
(class conjured_svalue::conjured_svalue): Likewise.
gcc/ChangeLog:
PR analyzer/107573
* doc/invoke.texi: Add -Wanalyzer-undefined-behavior-strtok.
gcc/testsuite/ChangeLog:
PR analyzer/107573
* c-c++-common/analyzer/strtok-1.c: New test.
* c-c++-common/analyzer/strtok-2.c: New test.
* c-c++-common/analyzer/strtok-3.c: New test.
* c-c++-common/analyzer/strtok-4.c: New test.
* c-c++-common/analyzer/strtok-cppreference.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This optimizes the simple case of formatting a single string, integer,
or bool with no format specifier (so no padding, alignment, alternate
form, etc.).
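For illustration (a sketch of the cases covered, not of the implementation):
only calls with a single string, integral, or bool argument and a bare "{}"
take the new path; anything with a format-spec still goes through the general
machinery.

  #include <format>
  #include <string>

  std::string demo (int i, bool b, const std::string &s)
  {
    auto a = std::format ("{}", i);      // optimized: plain "{}" with an int
    auto c = std::format ("{}", b);      // optimized: plain "{}" with a bool
    auto d = std::format ("{}", s);      // optimized: plain "{}" with a string
    auto e = std::format ("{:>8}", i);   // format-spec present: general path
    return a + c + d + e;
  }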
libstdc++-v3/ChangeLog:
PR libstdc++/110801
* include/std/format (_Sink_iter::_M_reserve): New member
function.
(_Sink::_Reservation): New nested class.
(_Sink::_M_reserve, _Sink::_M_bump): New virtual functions.
(_Seq_sink::_M_reserve, _Seq_sink::_M_bump): New virtual
overrides.
(_Iter_sink<O, ContigIter>::_M_reserve): Likewise.
(__do_vformat_to): Use new functions to optimize "{}" case.
Even if !HAVE_AS_SUPPORT_CALL36, const_call_insn_operand should still
return false for -mexplicit-relocs=none -mcmodel=medium so that
loongarch_legitimize_call_address emits la.local or la.global.
gcc/ChangeLog:
* config/loongarch/predicates.md (const_call_insn_operand):
Remove buggy "HAVE_AS_SUPPORT_CALL36" conditions. Change "1" to
"true" to make the coding style consistent.
This feature (CPUCFG word 0x3, bit 23) means "the hardware guarantees that
two loads from the same address won't be reordered with each other".  Thus
we can omit the "load-load" barrier dbar 0x700.
This is only a micro-optimization because dbar 0x700 is already treated
as a nop if the hardware supports LD_SEQ_SA.
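An illustrative sketch (not part of the patch): the relaxed atomic load below
is where the barrier decision now depends on -mld-seq-sa.

  int
  load_relaxed (const int *p)
  {
    /* Per the atomic_load<mode> pattern, a relaxed load is normally
       followed by "dbar 0x700"; with -mld-seq-sa that trailing barrier
       is omitted, since the hardware already orders same-address loads.  */
    return __atomic_load_n (p, __ATOMIC_RELAXED);
  }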
gcc/ChangeLog:
* config/loongarch/loongarch.cc (loongarch_print_operand): Don't
print dbar 0x700 if TARGET_LD_SEQ_SA.
* config/loongarch/sync.md (atomic_load<mode>): Likewise.
With -mdiv32, we can assume div.w[u] and mod.w[u] work on the low 32 bits
of a 64-bit GPR even if it is not sign-extended.
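A sketch of the kind of code that benefits (illustrative; whether the
extensions are actually elided depends on the final patterns):

  int
  div_low_parts (long a, long b)
  {
    /* With -mdiv32, div.w may consume the low 32 bits of the 64-bit GPRs
       directly, so no extra sign-extension instructions are needed before
       the division.  */
    return (int) a / (int) b;
  }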
gcc/ChangeLog:
* config/loongarch/loongarch.md (DIV): New mode iterator.
(<optab:ANY_DIV><mode:GPR>3): Don't expand if TARGET_DIV32.
(<optab:ANY_DIV>di3_fake): Disable if TARGET_DIV32.
(*<optab:ANY_DIV><mode:GPR>3): Allow SImode if TARGET_DIV32.
(<optab:ANY_DIV>si3_extended): New insn if TARGET_DIV32.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/div-div32.c: New test.
* gcc.target/loongarch/div-no-div32.c: New test.
gcc/ChangeLog:
* config/loongarch/loongarch-def.h:
(loongarch_isa_base_features): Declare. Define it in ...
* config/loongarch/loongarch-cpu.cc
(loongarch_isa_base_features): ... here.
(fill_native_cpu_config): If we know the base ISA of the CPU
model from PRID, use it instead of la64 (v1.0). Check whether all
expected features of this base ISA are available and emit a warning
if not.
* config/loongarch/loongarch-opts.cc (config_target_isa): Enable
the features implied by the base ISA if not -march=native.
LoongArch v1.10 introduced the concept of ISA evolution. During ISA
evolution, many independent features can be added and enumerated via
CPUCFG.
Add a data file to genopts storing the CPUCFG word, the bit, the name
of the command line option controlling whether this feature should be
used for compilation, and the text description. Make genstr.sh process
this information and add the command line options into loongarch.opt and
loongarch-str.h, and generate a new file loongarch-cpucfg-map.h for
mapping CPUCFG output to the corresponding option. When handling
-march=native, use the information in loongarch-cpucfg-map.h to generate
the corresponding option mask. Enable the features implied by the -march
setting unless the user has explicitly disabled the feature.
The added options (-mdiv32 and -mld-seq-sa) are not really handled yet.
They'll be used in the following patches.
gcc/ChangeLog:
* config/loongarch/genopts/isa-evolution.in: New data file.
* config/loongarch/genopts/genstr.sh: Translate info in
isa-evolution.in when generating loongarch-str.h, loongarch.opt,
and loongarch-cpucfg-map.h.
* config/loongarch/genopts/loongarch.opt.in (isa_evolution):
New variable.
* config/loongarch/t-loongarch: (loongarch-cpucfg-map.h): New
rule.
(loongarch-str.h): Depend on isa-evolution.in.
(loongarch.opt): Depend on isa-evolution.in.
(loongarch-cpu.o): Depend on loongarch-cpucfg-map.h.
* config/loongarch/loongarch-str.h: Regenerate.
* config/loongarch/loongarch-def.h (loongarch_isa): Add field
for evolution features. Add helper function to enable features
in this field.
* config/loongarch/loongarch-cpu.cc (fill_native_cpu_config):
Probe native CPU capability and save the corresponding options
into preset.
(cache_cpucfg): Simplify with C++11-style for loop.
(cpucfg_useful_idx, N_CPUCFG_WORDS): Move to ...
* config/loongarch/loongarch.cc
(loongarch_option_override_internal): Enable the ISA evolution
feature options implied by -march and not explicitly disabled.
(loongarch_asm_code_end): New function to print ISA information as
comments in the assembly if -fverbose-asm. This makes it easier to
debug things like -march=native.
(TARGET_ASM_CODE_END): Define.
* config/loongarch/loongarch.opt: Regenerate.
* config/loongarch/loongarch-cpucfg-map.h: Generate.
(cpucfg_useful_idx, N_CPUCFG_WORDS): ... here.
On LA664, the PRID preset is ISA_BASE_LA64V110 but the base architecture
is guessed as ISA_BASE_LA64V100. This causes a warning to be emitted:
cc1: warning: base architecture 'la64' differs from PRID preset '?'
But the string that should appear in place of the "?" has not been set in
loongarch_isa_base_strings, so it's a nullptr and an ICE is triggered.
Add ISA_BASE_LA64V110 to genopts and initialize
loongarch_isa_base_strings[ISA_BASE_LA64V110] correctly to fix the ICE.
The warning itself will be fixed later.
gcc/ChangeLog:
* config/loongarch/genopts/loongarch-strings:
(STR_ISA_BASE_LA64V110): Add.
* config/loongarch/genopts/loongarch.opt.in:
(ISA_BASE_LA64V110): Add.
* config/loongarch/loongarch-def.c
(loongarch_isa_base_strings): Initialize [ISA_BASE_LA64V110]
to STR_ISA_BASE_LA64V110.
* config/loongarch/loongarch.opt: Regenerate.
* config/loongarch/loongarch-str.h: Regenerate.
The code coverage support uses counters to determine which edges in the control
flow graph were executed. If a counter overflows, then the code coverage
information is invalid. Therefore the counter type should be a 64-bit integer.
In multi-threaded applications, it is important that the counter increments are
atomic. This is not the case by default. The user can enable atomic counter
increments through the -fprofile-update=atomic and
-fprofile-update=prefer-atomic options.
If the target supports 64-bit atomic operations, then everything is fine. If
not, and -fprofile-update=prefer-atomic was chosen by the user, then non-atomic
counter increments will be used. However, if the target does not support the
required atomic operations and -fprofile-update=atomic was chosen by the user,
then a warning was issued and a fallback to non-atomic operations was forced.
This is probably not what a user wants. There is still hardware on the
market which does not have atomic operations and is used for multi-threaded
applications. A user which selects -fprofile-update=atomic wants consistent
code coverage data and not random data.
This patch removes the fallback to non-atomic operations for
-fprofile-update=atomic if the target platform supports libatomic. To
mitigate potential performance issues an optimization for systems which
only support 32-bit atomic operations is provided. Here, the edge
counter increments are done like this:
low = __atomic_add_fetch_4 (&counter.low, 1, MEMMODEL_RELAXED);
high_inc = low == 0 ? 1 : 0;
__atomic_add_fetch_4 (&counter.high, high_inc, MEMMODEL_RELAXED);
In gimple_gen_time_profiler() this split operation cannot be used, since the
updated counter value is also required. Here, a library call is emitted. This
is not a performance issue since the update is only done if counters[0] == 0.
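A standalone sketch of the same split update, written with GCC's __atomic
built-ins rather than the GIMPLE the pass actually emits (gcov_counter and
counter_increment are hypothetical names):

  #include <cstdint>

  struct gcov_counter { std::uint32_t low; std::uint32_t high; };

  void
  counter_increment (gcov_counter *c)
  {
    /* If the low word wrapped around to zero, carry one into the high word.
       Each step is atomic; the pair as a whole is not, but once all
       increments have completed the 64-bit value is exact.  */
    std::uint32_t low = __atomic_add_fetch (&c->low, 1, __ATOMIC_RELAXED);
    std::uint32_t high_inc = low == 0 ? 1 : 0;
    __atomic_add_fetch (&c->high, high_inc, __ATOMIC_RELAXED);
  }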
gcc/c-family/ChangeLog:
* c-cppbuiltin.cc (c_cpp_builtins): Define
__LIBGCC_HAVE_LIBATOMIC for libgcov.
gcc/ChangeLog:
* doc/invoke.texi (-fprofile-update): Clarify default method. Document
the atomic method behaviour.
* tree-profile.cc (enum counter_update_method): New.
(counter_update): Likewise.
(gen_counter_update): Use counter_update_method. Split the
atomic counter update in two 32-bit atomic operations if
necessary.
(tree_profiling): Select counter_update_method.
libgcc/ChangeLog:
* libgcov.h (GCOV_SUPPORTS_ATOMIC): Always define it.
Set it also to 1, if __LIBGCC_HAVE_LIBATOMIC is defined.
Move the counter update to the new gen_counter_update() helper function. Use
it in gimple_gen_edge_profiler() and gimple_gen_time_profiler(). The resulting
gimple instructions should be identical, except for the removal of the
unshare_expr() call that gimple_gen_edge_profiler() previously used.
gcc/ChangeLog:
* tree-profile.cc (gen_assign_counter_update): New.
(gen_counter_update): Likewise.
(gimple_gen_edge_profiler): Use gen_counter_update().
(gimple_gen_time_profiler): Likewise.
gcc/ChangeLog:
* config/riscv/riscv-target-attr.cc
(riscv_target_attr_parser::parse_arch): Use char[] for
std::unique_ptr to prevent mismatched new delete issue.
(riscv_process_one_target_attr): Ditto.
(riscv_process_target_attr): Ditto.
This was missing from an earlier commit, which removed the only use of
these two variables.
gcc/testsuite/ChangeLog:
* gfortran.dg/coarray/caf.exp: Remove unused variable.
* gfortran.dg/dg.exp: Remove unused variable.
The LA464 memory model allows loads from the same address to complete out
of order, so in the following test example the load at line 23 may be
executed before the load at line 21, resulting in an error.
Therefore, when the memory model is MEMMODEL_RELAXED, the load instruction
is followed by "dbar 0x700" when implementing __atomic_load.
 1  void *
 2  gomp_ptrlock_get_slow (gomp_ptrlock_t *ptrlock)
 3  {
 4    int *intptr;
 5    uintptr_t oldval = 1;
 6
 7    __atomic_compare_exchange_n (ptrlock, &oldval, 2, false,
 8                                 MEMMODEL_RELAXED, MEMMODEL_RELAXED);
 9
10    /* futex works on ints, not pointers.
11       But a valid work share pointer will be at least
12       8 byte aligned, so it is safe to assume the low
13       32-bits of the pointer won't contain values 1 or 2.  */
14    __asm volatile ("" : "=r" (intptr) : "0" (ptrlock));
15  #if __BYTE_ORDER == __BIG_ENDIAN
16    if (sizeof (*ptrlock) > sizeof (int))
17      intptr += (sizeof (*ptrlock) / sizeof (int)) - 1;
18  #endif
19    do
20      do_wait (intptr, 2);
21    while (__atomic_load_n (intptr, MEMMODEL_RELAXED) == 2);
22    __asm volatile ("" : : : "memory");
23    return (void *) __atomic_load_n (ptrlock, MEMMODEL_ACQUIRE);
24  }
gcc/ChangeLog:
* config/loongarch/sync.md (atomic_load<mode>): New template.
1. __atomic_add_fetch and __atomic_fetch_add on short and char types are
implemented using amadd{_db}.{b/h} (see the sketch after this list).
2. __atomic_compare_exchange_n and __atomic_compare_exchange are implemented
using amcas{_db}.{b/h/w/d}.
3. __atomic_exchange and __atomic_exchange_n on short and char types are
implemented using amswap{_db}.{b/h}.
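For illustration (a sketch of the source-level operations involved, not of
the new patterns themselves):

  short
  fetch_add_short (short *p, short v)
  {
    /* With LA64 v1.10 this can expand to amadd{_db}.h instead of an
       ll.w/sc.w-based sequence on the containing word.  */
    return __atomic_fetch_add (p, v, __ATOMIC_SEQ_CST);
  }

  bool
  cas_char (char *p, char expected, char desired)
  {
    /* Likewise, this can expand to amcas{_db}.b.  */
    return __atomic_compare_exchange_n (p, &expected, desired, false,
                                        __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  }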
gcc/ChangeLog:
* config/loongarch/loongarch-def.h: Add comments.
* config/loongarch/loongarch-opts.h (ISA_BASE_IS_LA64V110): Define macro.
* config/loongarch/loongarch.cc (loongarch_memmodel_needs_rel_acq_fence):
Remove redundant code implementations.
* config/loongarch/sync.md (d): Add QI, HI support.
(atomic_add<mode>): New template.
(atomic_exchange<mode>_short): Likewise.
(atomic_cas_value_strong<mode>_amcas): Likewise.
(atomic_fetch_add<mode>_short): Likewise.
When compiling with '-mcmodel=medium', the function call is made through
'pcaddu18i+jirl' if binutils supports call36; otherwise the original
'pcalau12i+jirl' sequence is used.
gcc/ChangeLog:
* config.in: Regenerate.
* config/loongarch/loongarch-opts.h (HAVE_AS_SUPPORT_CALL36): Define macro.
* config/loongarch/loongarch.cc (loongarch_legitimize_call_address):
If binutils supports call36, do not split the function call during expand.
* config/loongarch/loongarch.md: Add call36 generation code.
* config/loongarch/predicates.md: Likewise.
* configure: Regenerate.
* configure.ac: Check whether binutils supports call36.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/func-call-medium-5.c: Skip if the assembler
supports call36.
* gcc.target/loongarch/func-call-medium-6.c: Likewise.
* gcc.target/loongarch/func-call-medium-7.c: Likewise.
* gcc.target/loongarch/func-call-medium-8.c: Likewise.
* lib/target-supports.exp: Add a function to check whether the
assembler supports the call36 relocation.
* gcc.target/loongarch/func-call-medium-call36-1.c: New test.
* gcc.target/loongarch/func-call-medium-call36.c: New test.
Co-authored-by: Xi Ruoyao <xry111@xry111.site>
This patch implements a new analyzer warning: -Wanalyzer-infinite-loop.
It works by examining the exploded graph once the latter has been
fully built. It attempts to detect cycles in the exploded graph in
which:
- no externally visible work occurs
- no escape is possible from the cycle once it has been entered
- the program state is "sufficiently concrete" at each step:
- no unknown activity could be occurring
- the worklist was fully drained for each enode in the cycle
i.e. every enode in the cycle is processed
For example, it correctly complains about this bogus "for" loop:
  int sum = 0;
  for (struct node *iter = n; iter; iter->next)
    sum += n->val;
  return sum;
like this:
infinite-loop-linked-list.c: In function ‘for_loop_noop_next’:
infinite-loop-linked-list.c:110:31: warning: infinite loop [CWE-835] [-Wanalyzer-infinite-loop]
110 | for (struct node *iter = n; iter; iter->next)
| ^~~~
‘for_loop_noop_next’: events 1-5
|
| 110 | for (struct node *iter = n; iter; iter->next)
| | ^~~~
| | |
| | (1) infinite loop here
| | (2) when ‘iter’ is non-NULL: always following ‘true’ branch...
| | (5) ...to here
| 111 | sum += n->val;
| | ~~~~~~~~~~~~~
| | | |
| | | (3) ...to here
| | (4) looping back...
|
gcc/ChangeLog:
PR analyzer/106147
* Makefile.in (ANALYZER_OBJS): Add analyzer/infinite-loop.o.
* doc/invoke.texi: Add -fdump-analyzer-infinite-loop and
-Wanalyzer-infinite-loop. Add missing CWE link for
-Wanalyzer-infinite-recursion.
* timevar.def (TV_ANALYZER_INFINITE_LOOPS): New.
gcc/analyzer/ChangeLog:
PR analyzer/106147
* analyzer.opt (Wanalyzer-infinite-loop): New option.
(fdump-analyzer-infinite-loop): New option.
* checker-event.h (start_cfg_edge_event::get_desc): Drop "final".
(start_cfg_edge_event::maybe_describe_condition): Convert from
private to protected.
* checker-path.h (checker_path::get_logger): New.
* diagnostic-manager.cc (process_worklist_item): Update for
new context param of maybe_update_for_edge.
* engine.cc
(impl_region_model_context::impl_region_model_context): Add
out_could_have_done_work param to both ctors and use it to
initialize m_out_could_have_done_work.
(impl_region_model_context::maybe_did_work): New vfunc
implementation.
(exploded_node::on_stmt): Add out_could_have_done_work param and
pass to ctxt ctor.
(exploded_node::on_stmt_pre): Treat setjmp and longjmp as "doing
work".
(exploded_node::on_longjmp): Likewise.
(exploded_edge::exploded_edge): Add "could_do_work" param and use
it to initialize m_could_do_work_p.
(exploded_edge::dump_dot_label): Add result of could_do_work_p.
(exploded_graph::add_function_entry): Mark edge as doing no work.
(exploded_graph::add_edge): Add "could_do_work" param and pass to
exploded_edge ctor.
(add_tainted_args_callback): Treat as doing no work.
(exploded_graph::process_worklist): Likewise when merging nodes.
(maybe_process_run_of_before_supernode_enodes::item): Likewise.
(exploded_graph::maybe_create_dynamic_call): Likewise.
(exploded_graph::process_node): Likewise for phi nodes.
Pass in a "could_have_done_work" bool when handling stmts and use
when creating edges. Assume work is done at bifurcation.
(exploded_path::feasible_p): Update for new context param of
maybe_update_for_edge.
(feasibility_state::feasibility_state): New ctor.
(feasibility_state::operator=): New.
(feasibility_state::maybe_update_for_edge): Add ctxt param and use
it. Fix missing newline when logging state.
(impl_run_checkers): Call exploded_graph::detect_infinite_loops.
* exploded-graph.h
(impl_region_model_context::impl_region_model_context): Add
out_could_have_done_work param to both ctors.
(impl_region_model_context::maybe_did_work): New decl.
(impl_region_model_context::checking_for_infinite_loop_p): New.
(impl_region_model_context::on_unusable_in_infinite_loop): New.
(impl_region_model_context::m_out_could_have_done_work): New
field.
(exploded_node::on_stmt): Add "out_could_have_done_work" param.
(exploded_edge::exploded_edge): Add "could_do_work" param.
(exploded_edge::could_do_work_p): New accessor.
(exploded_edge::m_could_do_work_p): New field.
(exploded_graph::add_edge): Add "could_do_work" param.
(exploded_graph::detect_infinite_loops): New decl.
(feasibility_state::feasibility_state): New ctor.
(feasibility_state::operator=): New decl.
(feasibility_state::maybe_update_for_edge): Add ctxt param.
* infinite-loop.cc: New file.
* program-state.cc (program_state::on_edge): Log the rejected
constraint when region_model::maybe_update_for_edge fails.
* region-model.cc (region_model::on_assignment): Treat any writes
other than to the stack as "doing work".
(region_model::on_stmt_pre): Treat all asm stmts as "doing work".
(region_model::on_call_post): Likewise for all calls to functions
with unknown side effects.
(region_model::handle_phi): Add svals_changing_meaning param.
Mark widening svalue in phi nodes as changing meaning.
(unusable_in_infinite_loop_constraint_p): New.
(region_model::add_constraint): If we're checking for an infinite
loop, bail out on unusable svalues, or if we don't have a definite
true/false for the constraint.
(region_model::update_for_phis): Gather all svalues changing
meaning in phi nodes, and purge constraints involving them.
(region_model::replay_call_summary): Treat all call summaries as
doing work.
(region_model::can_merge_with_p): Purge constraints involving
svalues that change meaning.
(model_merger::on_widening_reuse): New.
(test_iteration_1): Likewise.
(selftest::test_iteration_1): Remove assertion that model6 "knows"
that i < 157.
* region-model.h (region_model::handle_phi): Add
svals_changing_meaning param.
(region_model_context::maybe_did_work): New pure virtual func.
(region_model_context::checking_for_infinite_loop_p): Likewise.
(region_model_context::on_unusable_in_infinite_loop): Likewise.
(noop_region_model_context::maybe_did_work): Implement.
(noop_region_model_context::checking_for_infinite_loop_p):
Likewise.
(noop_region_model_context::on_unusable_in_infinite_loop):
Likewise.
(region_model_context_decorator::maybe_did_work): Implement.
(region_model_context_decorator::checking_for_infinite_loop_p):
Likewise.
(region_model_context_decorator::on_unusable_in_infinite_loop):
Likewise.
(model_merger::on_widening_reuse): New decl.
(model_merger::m_svals_changing_meaning): New field.
* sm-signal.cc (register_signal_handler::impl_transition): Assume
the edge "does work".
* supergraph.cc (supernode::get_start_location): Use CFG edge's
goto_locus if available.
(supernode::get_end_location): Likewise.
(cfg_superedge::dump_label_to_pp): Dump edges with a "goto_locus".
* supergraph.h (cfg_superedge::get_goto_locus): New.
* svalue.cc (svalue::can_merge_p): Call on_widening_reuse for
widening values.
(involvement_visitor::visit_widening_svalue): New.
(svalue::involves_p): Update assertion to allow widening svalues.
gcc/testsuite/ChangeLog:
PR analyzer/106147
* c-c++-common/analyzer/gzio-2.c: Add dg-warning for infinite
loop, marked as xfail.
* c-c++-common/analyzer/infinite-loop-2.c: New test.
* c-c++-common/analyzer/infinite-loop-4.c: New test.
* c-c++-common/analyzer/infinite-loop-crc32c.c: New test.
* c-c++-common/analyzer/infinite-loop-doom-d_main-IdentifyVersion.c:
New test.
* c-c++-common/analyzer/infinite-loop-doom-v_video.c: New test.
* c-c++-common/analyzer/infinite-loop-g_error.c: New test.
* c-c++-common/analyzer/infinite-loop-linked-list.c: New test.
* c-c++-common/analyzer/infinite-recursion-inlining.c: Add
dg-warning directives for infinite loop.
* c-c++-common/analyzer/inlining-4-multiline.c: Update expected
paths for event 5 having a location.
* gcc.dg/analyzer/boxed-malloc-1.c: Add dg-warning for infinite
loop.
* gcc.dg/analyzer/data-model-20.c: Likewise. Add comment about
suspect code, and create...
* gcc.dg/analyzer/data-model-20a.c: ...this new test by cleaning
it up.
* gcc.dg/analyzer/edges-1.c: Add a placeholder statement to avoid
the "...to here" from the if stmt occurring at the "while", and
thus being treated as a bogus event.
* gcc.dg/analyzer/explode-2a.c: Add dg-warning for infinite loop.
* gcc.dg/analyzer/infinite-loop-1.c: New test.
* gcc.dg/analyzer/malloc-1.c: Add dg-warning for infinite loop.
* gcc.dg/analyzer/out-of-bounds-coreutils.c: Add TODO.
* gcc.dg/analyzer/paths-4.c: Add dg-warning for infinite loop.
* gcc.dg/analyzer/pr103892.c: Likewise.
* gcc.dg/analyzer/pr93546.c: Likewise.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
For conditional operations the mask is loop invariant and cannot be
stored explicitly. By default, for reductions, we deduce the vectype
from the statement or the loop but this does not work for conditional
operations. Therefore this patch passes the truth type of the reduction
input vectype for the mask operand instead. This will override the
other choices and make sure we have the proper mask vectype.
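A schematic example of a conditional reduction with a loop-invariant mask,
loosely following the description above (the real reproducers live in the
new pr112406/pr112552 tests):

  double
  cond_sum (const double *a, int n, int flag)
  {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
      if (flag)          /* loop-invariant condition guarding the reduction */
        sum += a[i];
    return sum;
  }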
gcc/ChangeLog:
PR middle-end/112406
PR middle-end/112552
* tree-vect-loop.cc (vect_transform_reduction): Pass truth
vectype for mask operand.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/pr112406.c: New test.
* gcc.target/riscv/rvv/autovec/pr112552.c: New test.
This was approved for C++26 last week at the WG21 meeting in Kona.
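A brief usage sketch, assuming the interface adopted from P0543 (std::add_sat,
std::sub_sat, std::mul_sat, std::div_sat and std::saturate_cast in <numeric>,
guarded by __cpp_lib_saturation_arithmetic):

  #include <cstdint>
  #include <numeric>

  static_assert (std::add_sat<std::uint8_t> (250, 10) == 255);   // clamps at the maximum
  static_assert (std::sub_sat<std::uint8_t> (5, 10) == 0);       // clamps at the minimum
  static_assert (std::saturate_cast<std::int8_t> (1000) == 127); // clamps on conversion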
libstdc++-v3/ChangeLog:
* include/Makefile.am: Add new header.
* include/Makefile.in: Regenerate.
* include/bits/version.def (saturation_arithmetic): Define.
* include/bits/version.h: Regenerate.
* include/std/numeric: Include new header.
* include/bits/sat_arith.h: New file.
* testsuite/26_numerics/saturation/add.cc: New test.
* testsuite/26_numerics/saturation/cast.cc: New test.
* testsuite/26_numerics/saturation/div.cc: New test.
* testsuite/26_numerics/saturation/mul.cc: New test.
* testsuite/26_numerics/saturation/sub.cc: New test.
* testsuite/26_numerics/saturation/version.cc: New test.
The following patch implements
CWG 2406 - [[fallthrough]] attribute and iteration statements
The genericization of some loops leaves nothing at all, or just a label,
after the body of a loop, so if the loop is then followed by a
case or default label in a switch, the fallthrough statement isn't
diagnosed.
The following patch implements this by marking the IFN_FALLTHROUGH call
in such a case, so that during gimplification it can be pedantically
diagnosed even if it is followed by a case or default label, or by some
normal labels followed by case/default labels.
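A sketch of the construct in question (not the committed dr2406.C test): after
genericization the do/while (false) body leaves nothing between the attribute
and the following case label, and the attribute must still be diagnosed:

  void g ();

  void
  f (int n)
  {
    switch (n)
      {
      case 1:
        do
          {
            g ();
            [[fallthrough]];   /* Ill-formed per CWG 2406: the next statement
                                  executed is the loop's termination check,
                                  not the following case label.  */
          }
        while (false);
      case 2:
        g ();
        break;
      }
  }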
While looking into this, I've discovered other problems.
expand_FALLTHROUGH_r is removing the IFN_FALLTHROUGH calls from the IL,
but wasn't telling that to walk_gimple_stmt/walk_gimple_seq_mod, so
the callers would then skip the next statement after it, and it would
return non-NULL if the removed stmt was last in the sequence. This could
lead to wi->callback_result being set even if it didn't appear at the very
end of the switch sequence.
The patch makes use of wi->removed_stmt such that the callers properly
know what happened, and use different way to handle the end of switch
sequence case.
That change discovered a bug in the gimple-walk handling of
wi->removed_stmt. If that flag is set, the callback is telling the callers
that the current statement has been removed and so the innermost
walk_gimple_seq_mod shouldn't gsi_next. The problem is that
wi->removed_stmt is only reset at the start of a walk_gimple_stmt, but that
can be too late for some cases. If we have two nested gimple sequences,
say GIMPLE_BIND as the last stmt of some gimple seq, we remove the last
statement inside of that GIMPLE_BIND, set wi->removed_stmt there, and
correctly don't do gsi_next because gsi_remove has already moved us to the
next stmt; there is no next stmt, so we return back to the caller, but
wi->removed_stmt is still set and so we don't do gsi_next even in the outer
sequence, despite the GIMPLE_BIND (etc.) not being removed. That means we
walk the GIMPLE_BIND with its whole sequence again.
The patch fixes that by resetting wi->removed_stmt after we've used that
flag in walk_gimple_seq_mod. Nothing really uses that flag after the
outermost walk_gimple_seq_mod, it is just a private notification that
the stmt callback has removed a stmt.
2023-11-17 Jakub Jelinek <jakub@redhat.com>
PR c++/107571
gcc/
* gimplify.cc (expand_FALLTHROUGH_r): Use wi->removed_stmt after
gsi_remove, change the way of passing fallthrough stmt at the end
of sequence to expand_FALLTHROUGH. Diagnose IFN_FALLTHROUGH
with GF_CALL_NOTHROW flag.
(expand_FALLTHROUGH): Change loc into array of 2 location_t elts,
don't test wi.callback_result, instead check whether first
elt is not UNKNOWN_LOCATION and in that case pedwarn with the
second location.
* gimple-walk.cc (walk_gimple_seq_mod): Clear wi->removed_stmt
after the flag has been used.
* internal-fn.def (FALLTHROUGH): Mention in comment the special
meaning of the TREE_NOTHROW/GF_CALL_NOTHROW flag on the calls.
gcc/c-family/
* c-gimplify.cc (genericize_c_loop): For C++ mark IFN_FALLTHROUGH
call at the end of loop body as TREE_NOTHROW.
gcc/testsuite/
* g++.dg/DRs/dr2406.C: New test.
This is more consistent with the specification in the standard.
libstdc++-v3/ChangeLog:
* include/std/utility (in_range): Rename _Up parameter to _Res.
Improve Doxygen comments for std::out_ptr etc. and add a test for the
feature test macro. Also remove a redundant preprocessor condition.
Ideally the docs for std::out_ptr and std::inout_ptr would show examples
of how to use them and what they do, but that would take some effort.
I'll aim to do that before GCC 14 is released.
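Not part of this patch, but as a rough usage sketch (open_log and file_closer
are made-up names):

  #include <cstdio>
  #include <memory>

  // A C-style factory that hands back a resource via an out-parameter.
  int open_log (std::FILE **out);

  struct file_closer { void operator() (std::FILE *f) const { std::fclose (f); } };

  void
  demo ()
  {
    std::unique_ptr<std::FILE, file_closer> f;
    if (open_log (std::out_ptr (f)) == 0)   // f adopts the FILE* on return
      std::fputs ("hello\n", f.get ());
  }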
libstdc++-v3/ChangeLog:
* include/bits/out_ptr.h: Add Doxygen comments. Remove a
redundant preprocessor condition.
* testsuite/20_util/smartptr.adapt/version.cc: New test.
ctz(ext(X)) is the same as ctz(X) in the UB on zero case (or could be also
in the 2 argument case on large BITINT_TYPE by preserving the argument, not
implemented in this patch),
popcount(zext(X)) is the same as popcount(X),
parity(zext(X)) is the same as parity(X),
parity(sext(X)) is the same as parity(X) provided the bit difference between
the extended and unextended types is even,
ffs(ext(X)) is the same as ffs(X).
The following patch optimizes those in match.pd when they are beneficial
(always in the large BITINT_TYPE case, or if the narrower type has an optab
and the wider one doesn't, or the wider one is larger than a word and the
narrower one is one of the standard argument sizes (tested just int and
long long, as long has on most targets the same bitsize as one of those two)).
Joseph in the PR mentioned that ctz(narrow(X)) is the same as ctz(X)
if UB on 0, but that can be handled incrementally (and would need different
decisions when it is profitable).
And clz(zext(X)) is clz(X) + bit_difference, but not sure we want to change
that in match.pd at all, perhaps during insn selection?
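Two of the identities in source form (a sketch; whether the narrowing actually
happens follows the profitability conditions above):

  int
  parity_of_uint (unsigned x)
  {
    /* popcount (and hence parity) of a zero-extended value equals that of
       the original, so the widening cast can be dropped.  */
    return __builtin_popcountll ((unsigned long long) x) & 1;
  }

  int
  ffs_of_int (int x)
  {
    /* ffs of a sign- or zero-extended value equals ffs of the original.  */
    return __builtin_ffsll ((long long) x);
  }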
2023-11-17 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/112566
PR tree-optimization/83171
* match.pd (ctz(ext(X)) -> ctz(X), popcount(zext(X)) -> popcount(X),
parity(ext(X)) -> parity(X), ffs(ext(X)) -> ffs(X)): New
simplifications.
( __builtin_ffs (X) == 0 -> X == 0): Use FFS rather than
BUILT_IN_FFS BUILT_IN_FFSL BUILT_IN_FFSLL BUILT_IN_FFSIMAX.
* gcc.dg/pr112566-1.c: New test.
* gcc.dg/pr112566-2.c: New test.
* gcc.target/i386/pr78057.c (foo): Pass another long long argument
and use it in __builtin_ia32_*zcnt_u64 instead of the int one.
As mentioned in the PR, the intent of the r14-5076 changes was that
it doesn't count one of the uses on the use_stmt, but what actually
got implemented is that it does this processing on any op_use_stmt,
even if it is not the use_stmt statement, which means that it
can increase count even on debug stmts (-fcompare-debug failures),
or if there would be some other use stmt with 2+ uses it could count
that as a single use. Though, because it fails whenever cnt != 1
and I believe use_stmt must be one of the uses, it would probably
fail in the latter case anyway.
The following patch fixes that by doing this extra processing only when
op_use_stmt is use_stmt, and using the normal processing otherwise
(so ignore debug stmts, and increase on any uses on the stmt).
2023-11-17 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/112374
* tree-vect-loop.cc (check_reduction_path): Perform the cond_fn_p
special case only if op_use_stmt == use_stmt, use as_a rather than
dyn_cast in that case.
* gcc.dg/pr112374-1.c: New test.
* gcc.dg/pr112374-2.c: New test.
* g++.dg/opt/pr112374.C: New test.
The offending commit r14-5444-g5ea2965b499f9e was reverted. The
following adds a testcase.
PR tree-optimization/112585
* gcc.dg/torture/pr112585.c: New testcase.
This patch accepts -std=f2023, uses it by default, and, for free source
form, bumps the line length to 10,000 characters and the statement length
(i.e. the number of continuation lines) to effectively unlimited.
gcc/fortran/ChangeLog:
* gfortran.texi (_gfortran_set_options): Document GFC_STD_F2023.
* invoke.texi (std,pedantic,Wampersand,Wtabs): Add -std=f2023.
* lang.opt (std=f2023): Add.
* libgfortran.h (GFC_STD_F2023, GFC_STD_OPT_F23): Add.
* options.cc (set_default_std_flags): Add GFC_STD_F2023.
(gfc_init_options): Set max_continue_free to 1,000,000.
(gfc_post_options): Set flag_free_line_length if unset.
(gfc_handle_option): Add OPT_std_f2023, set max_continue_free = 255
for -std=f2003, f2008 and f2018.
gcc/testsuite/ChangeLog:
* gfortran.dg/goacc/warn_truncated.f90: Add -std=f2018 option.
* gfortran.dg/gomp/warn_truncated.f90: Likewise.
* gfortran.dg/line_length_10.f90: Likewise.
* gfortran.dg/line_length_11.f90: Likewise.
* gfortran.dg/line_length_2.f90: Likewise.
* gfortran.dg/line_length_5.f90: Likewise.
* gfortran.dg/line_length_6.f90: Likewise.
* gfortran.dg/line_length_7.f90: Likewise.
* gfortran.dg/line_length_8.f90: Likewise.
* gfortran.dg/line_length_9.f90: Likewise.
* gfortran.dg/continuation_17.f90: New test.
* gfortran.dg/continuation_18.f90: New test.
* gfortran.dg/continuation_19.f: New test.
* gfortran.dg/line_length_12.f90: New test.
* gfortran.dg/line_length_13.f90: New test.
gcc/
PR target/53372
* config/avr/avr.cc (avr_asm_named_section) [AVR_SECTION_PROGMEM]:
Only return some .progmem*.data section if the user did not
specify a section attribute.
(avr_section_type_flags) [avr_progmem_p]: Unset SECTION_NOTYPE
in returned section flags.
gcc/testsuite/
PR target/53372
* gcc.target/avr/pr53372-1.c: New test.
* gcc.target/avr/pr53372-2.c: New test.
Commit a0673ec5f9 introduced some noisy messages, which clutter the
output with things like:
dg set al ...
revised FFLAGS ...
and are not really useful information. Let's remove them.
gcc/testsuite/ChangeLog:
* gfortran.dg/coarray/caf.exp: Remove some output.
* gfortran.dg/dg.exp: Remove some output.
With LSX or LASX, copysign (x[i], -1) (or any negative constant) can be
vectorized using [x]vbitseti.{w/d} instructions to directly set the
sign bits.
Inspired by Tamar Christina's "AArch64: Handle copysign (x, -1) expansion
efficiently" (r14-5289).
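A sketch of the kind of loop affected (illustrative only; the committed tests
are the vect-copysign-negconst ones):

  void
  force_negative (double *x, int n)
  {
    for (int i = 0; i < n; i++)
      /* copysign (v, -1.0) only needs to set the sign bit, so with LSX/LASX
         this can be vectorized with [x]vbitseti.d on the sign-bit position
         instead of materializing a vector of -1.0.  */
      x[i] = __builtin_copysign (x[i], -1.0);
  }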
gcc/ChangeLog:
* config/loongarch/lsx.md (copysign<mode>3): Allow operand[2] to
be a reg_or_vector_same_val_operand. If it's a const vector
with identical negative elements, expand the copysign with a bitset
instruction. Otherwise, force it into a register.
* config/loongarch/lasx.md (copysign<mode>3): Likewise.
gcc/testsuite/ChangeLog:
* g++.target/loongarch/vect-copysign-negconst.C: New test.
* g++.target/loongarch/vect-copysign-negconst-run.C: New test.
The previous patch enables 16-byte by-pieces moves. Originally the 16-byte
move was implemented via a pattern, and expand_block_move did an optimization
on P8 LE to leverage V2DI reversed load/store for memory-to-memory moves.
Now the 16-byte move is implemented via the by-pieces infrastructure and is
finally split into two DI load/stores. This patch creates an insn_and_split
pattern to regain the optimization.
gcc/
PR target/111449
* config/rs6000/vsx.md (*vsx_le_mem_to_mem_mov_ti): New.
gcc/testsuite/
PR target/111449
* gcc.target/powerpc/pr111449-2.c: New.
This patch adds a new expand pattern - cbranchv16qi4 - to enable vector-mode
by-pieces equality compares on rs6000. The macro MOVE_MAX_PIECES
(and COMPARE_MAX_PIECES) is set to 16 bytes when EFFICIENT_UNALIGNED_VSX is
enabled, and otherwise remains unchanged. The macro STORE_MAX_PIECES defaults
to the same value as MOVE_MAX_PIECES, so it is now defined explicitly and
kept unchanged.
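A sketch of the kind of comparison the new pattern serves (illustrative only):

  #include <cstring>

  bool
  same_block (const char *a, const char *b)
  {
    /* A 16-byte equality test: with EFFICIENT_UNALIGNED_VSX the
       compare-by-pieces machinery can now emit a single V16QI vector
       compare-and-branch instead of scalar pieces or a libcall.  */
    return std::memcmp (a, b, 16) == 0;
  }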
gcc/
PR target/111449
* config/rs6000/altivec.md (cbranchv16qi4): New expand pattern.
* config/rs6000/rs6000.cc (rs6000_generate_compare): Generate
insn sequence for V16QImode equality compare.
* config/rs6000/rs6000.h (MOVE_MAX_PIECES): Define.
(STORE_MAX_PIECES): Define.
gcc/testsuite/
PR target/111449
* gcc.target/powerpc/pr111449-1.c: New.
* gcc.dg/tree-ssa/sra-17.c: Add additional options for 32-bit powerpc.
* gcc.dg/tree-ssa/sra-18.c: Likewise.
LoongArch has defined ctz and clz in the backend, but if we want GCC to do
the CTZ transformation in the forwprop2 pass, GCC needs to know the value of
c[lt]z at zero, which may be beneficial for some test cases (like SPEC2017
deepsjeng_r).
After implementing the macros, we measured the dynamic instruction count on
deepsjeng_r:
- before: 1688423249186
- after: 1660311215745 (1.66% reduction)
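As a sketch of what the macros enable (assuming, as the macros now report,
that the 32-bit instructions yield 32 for a zero input):

  int
  count_trailing (unsigned x)
  {
    /* With CTZ_DEFINED_VALUE_AT_ZERO reporting a defined result (32 here),
       GCC may drop the explicit zero check, and forwprop2 can turn
       table-based ctz idioms (as in gcc.dg/pr90838.c) into a plain ctz.  */
    return x ? __builtin_ctz (x) : 32;
  }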
gcc/ChangeLog:
* config/loongarch/loongarch.h (CLZ_DEFINED_VALUE_AT_ZERO):
Implement.
(CTZ_DEFINED_VALUE_AT_ZERO): Same.
gcc/testsuite/ChangeLog:
* gcc.dg/pr90838.c: Add clz/ctz test support on LoongArch.
We have a support case that shows GCC 7 sometimes creates DW_TAG_label
referring to itself via a DW_AT_abstract_origin when using LTO. This,
for example, triggers the sanity check added below during LTO bootstrap.
Making this check cover more than just DW_AT_abstract_origin
breaks bootstrap on trunk for
  /* GNU extension: Record what type our vtable lives in.  */
  if (TYPE_VFIELD (type))
    {
      tree vtype = DECL_FCONTEXT (TYPE_VFIELD (type));

      gen_type_die (vtype, context_die);
      add_AT_die_ref (type_die, DW_AT_containing_type,
                      lookup_type_die (vtype));
    }
so the check is for now restricted to DW_AT_abstract_origin
and DW_AT_specification, both of which we follow within get_AT.
* dwarf2out.cc (add_AT_die_ref): Assert we do not add
a self-ref DW_AT_abstract_origin or DW_AT_specification.
Based on SPEC2017 performance evaluation results, it's better to make these
costs equal to the cost of unaligned store/load, so as to avoid odd alignment
peeling.
gcc/ChangeLog:
* config/loongarch/loongarch.cc
(loongarch_builtin_vectorization_cost): Adjust.