0.1.7.post1 #1526
LeiWang1999 announced in Announcements
What's Changed
- [Bugfix] Alloc `T.make_tensor` not on the top of prim_func by @LeiWang1999 in #1412
- [Enhancement] Introduce `T.__ldg` by @LeiWang1999 in #1414
- [Bugfix] Convey `compile_flags` to ffi compilation path with pass_configs by @LeiWang1999 in #1434
- [Fix] Fix analyzer bind conflicting bug in `example/dsa_sparse_finetune/indexer_topk_reducesum.py` (#1442) by @kurisu6912 in #1446
- [Refactor] Use `pytest.mark.parameterize` to speedup parallel testing by @kurisu6912 in #1447
- [Language] Introduce `T.annotate_restrict_buffers` by @LeiWang1999 in #1428
- [Refactor] Rename test for curand & add triton baseline in `test_tilelang_language_rand.py` by @silentCoder-dev in #1464
- [Refactor] Phaseout PassConfig `kDisableDynamicTailSplit` and `kDynamicAlignment` as they are legacy by @LeiWang1999 in #1486
- [Refactor] Phaseout legacy `alloc_local` statement in examples and introduce processing for floating fragment buffers by @LeiWang1999 in #1495
- [Refactor] Phaseout execution_backend `ctypes` by @LeiWang1999 in #1510
- Use `TargetIsCuda` for all cuda target by @oraluben in #1522

New Contributors
Full Changelog: v0.1.7...v0.1.7.post1
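One entry above moves the test suite onto pytest parametrization so independent cases can run in parallel. As a minimal illustrative sketch (not code from the TileLang repository; the test name and shape pairs here are hypothetical, and note that pytest's built-in marker is spelled `parametrize`):

```python
import pytest

# Each (M, N) pair becomes a separate test item, so a parallel runner
# such as pytest-xdist (`pytest -n auto`) can schedule cases on
# different workers instead of running one monolithic test.
@pytest.mark.parametrize("M,N", [(64, 64), (128, 256), (1024, 1024)])
def test_matmul_shapes(M, N):
    # Placeholder check standing in for a real kernel correctness test.
    assert M * N > 0
```

The decorated function remains directly callable, but under pytest each parameter tuple is collected and reported as its own test case.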
This discussion was created from the release 0.1.7.post1.