| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| hiyouga | 94d5b1bd8f | add e2e tests | 2024-09-05 21:52:28 +08:00 |
| hiyouga | 359ef8bb0e | support Yi-Coder models | 2024-09-05 03:12:24 +08:00 |
| hiyouga | 57497135bf | add vl_feedback dataset | 2024-09-04 03:13:03 +08:00 |
| hiyouga | 194064fdae | add pokemon dataset | 2024-09-02 01:02:25 +08:00 |
| hiyouga | a8f8a2ad8a | update readme | 2024-09-01 23:32:39 +08:00 |
| hiyouga | 753c5fb36c | update wechat | 2024-09-01 23:30:57 +08:00 |
| hiyouga | 8e49940746 | add rlhf-v dataset | 2024-09-01 22:57:41 +08:00 |
| hiyouga | e08045a946 | add examples | 2024-08-30 21:43:19 +08:00 |
| hiyouga | 3382317e32 | refactor mm training | 2024-08-30 02:14:31 +08:00 |
| hiyouga | d14edd350d | add extra requires | 2024-08-27 12:52:12 +08:00 |
| hiyouga | 72bc8f0111 | support liger kernel | 2024-08-27 11:20:14 +08:00 |
| hiyouga | 3804ddec9e | update readme | 2024-08-19 23:32:04 +08:00 |
| codingma | 625a0e32c4 | add tutorial and doc links | 2024-08-13 16:13:10 +08:00 |
| hiyouga | c93d55bfb0 | update readme | 2024-08-10 10:17:35 +08:00 |
| hiyouga | 576a894f77 | update readme | 2024-08-09 20:46:02 +08:00 |
| hiyouga | c75b5b83c4 | add magpie ultra dataset | 2024-08-09 20:28:55 +08:00 |
| hiyouga | dc770efb14 | add qwen2 math models | 2024-08-09 20:20:35 +08:00 |
| hiyouga | e2a28f51c6 | add adam_mini to readme | 2024-08-09 20:02:03 +08:00 |
| hiyouga | 86f7099fa3 | update scripts | 2024-08-09 19:16:23 +08:00 |
| hiyouga | b7ca6c8dc1 | fix #5048 | 2024-08-05 23:48:19 +08:00 |
| hoshi-hiyouga | 3a49c76b65 | Update README_zh.md | 2024-07-30 01:55:13 +08:00 |
| liudan | b9ed9d45cc | Added MiniCPM to the supported models list on the homepage; MiniCPM's official GitHub also added a friendly link to LLama_factory | 2024-07-29 10:58:28 +08:00 |
| hiyouga | 668654b5ad | tiny fix | 2024-07-26 11:51:00 +08:00 |
| hoshi-hiyouga | 77e7bfee79 | Update README_zh.md | 2024-07-26 11:30:57 +08:00 |
| khazic | ceba96f9ed | Added the reference address for TRL PPO details | 2024-07-25 09:03:21 +08:00 |
| hiyouga | 77cff78863 | fix #4959 | 2024-07-24 23:44:00 +08:00 |
| hoshi-hiyouga | 71d3e60713 | Update README_zh.md | 2024-07-24 21:08:42 +08:00 |
| hiyouga | 26533c0604 | add llama3.1 | 2024-07-24 16:20:11 +08:00 |
| hiyouga | 87346c0946 | update readme | 2024-07-03 19:39:05 +08:00 |
| wangzhihong | 6f8f53f879 | Update README_zh.md | 2024-07-03 14:59:09 +08:00 |
| hiyouga | d4e2af1fa4 | update readme | 2024-07-01 00:22:52 +08:00 |
| hiyouga | d74244d568 | fix #4398 #4592 | 2024-06-30 21:28:51 +08:00 |
| hiyouga | 0e0d69b77c | update readme | 2024-06-28 06:55:19 +08:00 |
| hiyouga | 6f63050e1b | add Gemma2 models | 2024-06-28 01:26:50 +08:00 |
| hiyouga | e44a4f07f0 | tiny fix | 2024-06-27 20:14:48 +08:00 |
| hoshi-hiyouga | 64b131dcfa | Merge pull request #4461 from hzhaoy/feature/support-flash-attn (support flash-attn in Dockerfile) | 2024-06-27 20:05:26 +08:00 |
| hiyouga | ad144c2265 | support HQQ/EETQ #4113 | 2024-06-27 00:29:42 +08:00 |
| hzhaoy | e19491b0f0 | add flash-attn installation flag in Dockerfile | 2024-06-27 00:13:30 +08:00 |
| hiyouga | efb81b25ec | fix #4419 | 2024-06-25 01:51:29 +08:00 |
| hiyouga | 41086059b1 | tiny fix | 2024-06-25 01:15:19 +08:00 |
| hoshi-hiyouga | ec95f942d1 | Update README_zh.md | 2024-06-25 01:06:59 +08:00 |
| MengqingCao | d7207e8ad1 | update docker files: add docker-npu (Dockerfile and docker-compose.yml); move cuda docker to docker-cuda and tiny changes to adapt to the new path | 2024-06-24 10:57:36 +00:00 |
| hiyouga | 4ea84a8333 | update readme | 2024-06-24 18:29:04 +08:00 |
| hiyouga | e507e60638 | update readme | 2024-06-24 18:22:12 +08:00 |
| hiyouga | 344b9a36b2 | tiny fix | 2024-06-18 23:32:18 +08:00 |
| hoshi-hiyouga | 10316dd8ca | Merge pull request #4309 from EliMCosta/patch-1 (Add Magpie and Webinstruct dataset samples) | 2024-06-18 23:30:19 +08:00 |
| hiyouga | a233fbc258 | add deepseek coder v2 #4346 | 2024-06-18 22:53:54 +08:00 |
| hiyouga | fcb2e8e7b7 | update readme | 2024-06-17 18:47:24 +08:00 |
| Eli Costa | 3ec57ac239 | Update README_zh.md (fix details tag in datasets menus) | 2024-06-16 11:34:31 -03:00 |
| Eli Costa | 82d5c5c1e8 | Update README_zh.md (add Magpie and WebInstruct to README) | 2024-06-16 11:22:06 -03:00 |