tensorflow/third_party/xla
Commit 733d71db88 by Shaogang Wang (2024-11-21 04:47:24 -08:00)
PR #19528: [XLA:GPU] use separate command buffer cmd flag for conditional and loop
Imported from GitHub PR https://github.com/openxla/xla/pull/19528

In a saxml workload we observed that sharing a single command buffer cmd type (CONDITIONALS) for both WHILE and CONDITIONAL commands needlessly kills lowering opportunities.

In many cases a CONDITIONAL instruction can be lowered into a command buffer even when a WHILE cannot.

This PR uses separate command buffer cmd type flags for CONDITIONAL and WHILE instructions when the user specifies which types to lower.
Copybara import of the project:

--
4d62fb512995e2fc6e9077a1b3251a6754c866ca by Shawn Wang <shawnw@nvidia.com>:

use separate command buffer cmd flag for conditional and loop

Merging this change closes #19528

PiperOrigin-RevId: 698729891
.github Temporarily exclude xla/tsl from buildifier checks 2024-11-15 13:31:40 -08:00
.kokoro
build_tools
docs [XLA:GPU][IndexAnalysis] Update documentation for indexing maps. 2024-11-18 14:31:30 -08:00
third_party Move tsl/platform/{cloud,default,windows} to xla/tsl/platform 2024-11-20 18:15:47 -08:00
tools
xla PR #19528: [XLA:GPU] use separate command buffer cmd flag for conditional and loop 2024-11-21 04:47:24 -08:00
.bazelrc
.bazelversion
.clang-format
.clang-tidy
.gitignore
BUILD.bazel
CONTRIBUTING.md
LICENSE
opensource_only.files [xla:cpu] Add initial implementation of NanoRt backends for XLA:CPU 2024-11-20 10:08:12 -08:00
README.md
requirements_lock_3_11.txt
warnings.bazelrc Update XLA's warnings.bazelrc 2024-11-18 16:04:04 -08:00
workspace0.bzl
workspace1.bzl
workspace2.bzl
workspace3.bzl
workspace4.bzl
WORKSPACE

XLA

XLA (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler for GPUs, CPUs, and ML accelerators.

OpenXLA Ecosystem

The XLA compiler takes models from popular ML frameworks such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms including GPUs, CPUs, and ML accelerators.

Get started

If you want to use XLA to compile your ML project, refer to the corresponding documentation for your ML framework:

  • PyTorch: https://pytorch.org/xla
  • TensorFlow: https://www.tensorflow.org/xla
  • JAX: https://jax.readthedocs.io

If you're not contributing code to the XLA compiler, you don't need to clone and build this repo. Everything here is intended for XLA contributors who want to develop the compiler and XLA integrators who want to debug or add support for ML frontends and hardware backends.

Contribute

If you'd like to contribute to XLA, review How to Contribute and then see the developer guide.

Contacts

  • For questions, contact the maintainers at maintainers@openxla.org

Resources

Code of Conduct

While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.