Pogacar wins stage seven and retakes Tour de France lead

Tadej Pogacar reclaimed the overall lead of the 2025 Tour de France after winning stage seven, held on Friday local time at Mur de Bretagne, France, according to the official Tour de France website.

The Slovenian rider recorded the 19th stage win of his Tour de France career. Pogacar beat his main rival Jonas Vingegaard, who finished second, while Oscar Onley took third.

The 172 km stage from Saint Malo to Mur de Bretagne was marred by a crash involving nine riders in the closing kilometres.

Pogacar, who rides for UAE Team Emirates, collected a 10-second time bonus, while Vingegaard earned six seconds. The four-second gain over Vingegaard on this stage consolidated Pogacar's position at the top of the standings.

Remco Evenepoel moved up to second in the general classification, 54 seconds back, followed by young home rider Kevin Vauquelin in third at one minute 11 seconds.

Vingegaard now sits fourth, one minute 17 seconds behind.

Meanwhile, Mathieu van der Poel surrendered the yellow jersey after being dropped on the final climb, slipping to fifth overall.

The stage was also a worrying one for UAE Team Emirates after Pogacar's teammate Joao Almeida crashed before the second ascent of Mur de Bretagne and is doubtful for the next stage.

Bahrain Victorious also suffered a blow: Jack Haig withdrew from the race, and team leader Santiago Buitrago lost more than 13 minutes following a crash.

Stage eight of the 2025 Tour de France is scheduled for Saturday (13/7), covering a flat route to Laval.

Stage 7 results, 2025 Tour de France

1. Tadej Pogacar (UAE Team Emirates): 4h 05' 39"
2. Jonas Vingegaard (Team Visma | Lease a Bike): 4h 05' 39"
3. Oscar Onley (Team dsm-firmenich PostNL): 4h 05' 41"
4. Felix Gall (Decathlon AG2R La Mondiale): 4h 05' 41"
5. Matteo Jorgenson (Team Visma | Lease a Bike): 4h 05' 41"
6. Remco Evenepoel (Soudal Quick-Step): 4h 05' 41"
7. Kevin Vauquelin (Arkéa-B&B Hotels): 4h 05' 41"
8. Jhonatan Narvaez (UAE Team Emirates): 4h 05' 46"
9. Giulio Ciccone (Lidl-Trek): 4h 05' 54"
10. Simon Yates (Team Jayco AlUla): 4h 06' 00"

General classification
1. Tadej Pogacar (UAE Team Emirates): 25h 58' 04"
2. Remco Evenepoel (Soudal Quick-Step): 25h 58' 58"
3. Kevin Vauquelin (Arkéa-B&B Hotels): 25h 59' 15"
4. Jonas Vingegaard (Team Visma | Lease a Bike): 25h 59' 21"
5. Mathieu van der Poel (Alpecin-Deceuninck): 25h 59' 33"
6. Matteo Jorgenson (Team Visma | Lease a Bike): 25h 59' 38"
7. Oscar Onley (Team dsm-firmenich PostNL): 26h 00' 53"
8. Florian Lipowitz (Red Bull-Bora-Hansgrohe): 26h 01' 06"
9. Primoz Roglic (Red Bull-Bora-Hansgrohe): 26h 01' 10"
10. Mattias Skjelmose (Lidl-Trek): 26h 01' 47"
