Commit Graph

5937 Commits

Author SHA1 Message Date
Oli Scherer
60956837cf s/Generator/Coroutine/ 2023-10-20 21:10:38 +00:00
Ralf Jung
98d54da1ee document that the null pointer has the 0 address 2023-10-20 19:10:20 +02:00
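A small illustration of the guarantee documented here (a sketch, not code from the change itself):

```rust
fn main() {
    let p: *const u8 = std::ptr::null();

    // The null pointer is the pointer with address 0.
    assert_eq!(p as usize, 0);
    assert!(p.is_null());
}
```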
ltdk
b9c2d0e4ab Fix typo in atomic docs 2023-10-20 00:57:29 -04:00
León Orell Valerian Liehr
80c9588549 Rollup merge of #116795 - DaniPopes:track-caller-option, r=cuviper
Add `#[track_caller]` to `Option::unwrap_or_else`

Same as #116317 but for `Option`.

Closes #115302
2023-10-19 04:34:46 +02:00
Joshua Liebow-Feeser
3fea7cc7da Guarantee that char has the same size and alignment as u32 2023-10-18 09:14:31 -07:00
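A minimal check of the guarantee stated in this commit (illustrative only):

```rust
use std::mem::{align_of, size_of};

fn main() {
    // `char` is guaranteed to have the same size and alignment as `u32`.
    assert_eq!(size_of::<char>(), size_of::<u32>());
    assert_eq!(align_of::<char>(), align_of::<u32>());
}
```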
Slanterns
10e6372a83 Stabilize result_option_inspect 2023-10-18 07:35:23 +08:00
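A brief sketch of the stabilized methods (`Option::inspect`, `Result::inspect`, `Result::inspect_err`); the values are illustrative:

```rust
fn main() {
    // Peek at the contained value without consuming or transforming it.
    let doubled = Some(3)
        .inspect(|x| println!("about to double {x}"))
        .map(|x| x * 2);
    assert_eq!(doubled, Some(6));

    // The Result counterparts allow logging success and error cases in a chain.
    let parsed = "4".parse::<i32>()
        .inspect(|n| println!("parsed {n}"))
        .inspect_err(|e| eprintln!("parse failed: {e}"));
    assert_eq!(parsed, Ok(4));
}
```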
Oli Scherer
bcdd3d7739 Disable effects in libcore again 2023-10-17 17:55:49 +00:00
bors
93e62a260f Auto merge of #115577 - RalfJung:atomic-load, r=Amanieu
document when atomic loads are guaranteed read-only

Based on this [discussion in Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/Can.20.60Atomic*.3A.3Aload.60.20perform.20a.20write).

The values for x86 and x86_64 are complete guesswork on my side, and I have no clue what the values might be for other architectures. I hope we can get the right people to chime in to gather the required information. :)

I'll update Miri to respect these rules once we have more data.
2023-10-17 14:11:31 +00:00
Ralf Jung
e494df436d remove 128bit atomics, they are anyway not exposed on those targets 2023-10-17 07:56:49 +02:00
Nilstrieb
414135d522 Make rustc_on_unimplemented export path agnostic
This makes it so that all the matchers that match against paths use the
definition path instead of the export path. This removes all duplication
around `std`/`alloc`/`core`.

This is not necessarily optimal because we now depend on internal
implementation details like `core::ops::control_flow::ControlFlow`,
which is not very nice and probably not acceptable for a stable
`on_unimplemented`.

An alternative would be to simply normalize `alloc`/`core` to `std` via string replacement as a special case, keeping the export paths while still remaining fully agnostic of the standard library flavor.
2023-10-16 19:37:12 +02:00
Ralf Jung
6605116463 use target-arch based table 2023-10-16 19:29:16 +02:00
DaniPopes
0df670fb67 Add #[track_caller] to Option::unwrap_or_else 2023-10-16 15:17:15 +02:00
Matthias Krüger
17113f7db6 Rollup merge of #115955 - tgross35:ip-to-canonical, r=dtolnay
Stabilize `{IpAddr, Ipv6Addr}::to_canonical`

Make `IpAddr::to_canonical` and `Ipv6Addr::to_canonical` stable (+const), and const-stabilize `Ipv6Addr::to_ipv4_mapped`.

Newly stable API:

```rust
impl IpAddr {
    // Newly stable under `ip_to_canonical`
    const fn to_canonical(&self) -> IpAddr;
}

impl Ipv6Addr {
    // Newly stable under `ip_to_canonical`
    const fn to_canonical(&self) -> IpAddr;

    // Already stable, this makes it const stable under
    // `const_ipv6_to_ipv4_mapped`
    const fn to_ipv4_mapped(&self) -> Option<Ipv4Addr>
}
```
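A brief usage sketch of these methods (addresses chosen for illustration):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

fn main() {
    // An IPv4-mapped IPv6 address: ::ffff:192.0.2.1
    let mapped = Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc000, 0x0201);

    // `to_canonical` exposes the embedded IPv4 address as `IpAddr::V4`.
    assert_eq!(mapped.to_canonical(), IpAddr::V4(Ipv4Addr::new(192, 0, 2, 1)));

    // `to_ipv4_mapped` extracts the IPv4 address, if there is one.
    assert_eq!(mapped.to_ipv4_mapped(), Some(Ipv4Addr::new(192, 0, 2, 1)));

    // Addresses that are not IPv4-mapped are returned unchanged.
    let plain = IpAddr::V6(Ipv6Addr::LOCALHOST);
    assert_eq!(plain.to_canonical(), plain);
}
```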

These stabilize a subset of the following tracking issues:

- https://github.com/rust-lang/rust/issues/27709
- https://github.com/rust-lang/rust/issues/76205

Stabilization of all methods under the `ip` gate was attempted once at https://github.com/rust-lang/rust/pull/66584 and then again at https://github.com/rust-lang/rust/pull/76098. These were not successful because there are still unknowns about `is_documentation`, `is_benchmarking`, and similar; `to_canonical` is much more straightforward.

I have looked and could not find any known issues with `to_canonical`. These methods were added in 2021 in https://github.com/rust-lang/rust/pull/87708.

cc implementor ``@the8472``

r? libs-api
``@rustbot`` label +T-libs-api +needs-fcp
2023-10-16 06:26:20 +02:00
bors
30d310cc1f Auto merge of #113747 - clarfonthey:ip_bitops, r=dtolnay
impl Not, Bit{And,Or}{,Assign} for IP addresses

ACP: rust-lang/libs-team#235

Note: since these are insta-stable, they require an FCP.

Implements, where `N` is either `4` or `6`:

```rust
impl Not for IpvNAddr
impl Not for &IpvNAddr

impl BitAnd<IpvNAddr> for IpvNAddr
impl BitAnd<&IpvNAddr> for IpvNAddr
impl BitAnd<IpvNAddr> for &IpvNAddr
impl BitAnd<&IpvNAddr> for &IpvNAddr

impl BitAndAssign<IpvNAddr> for IpvNAddr
impl BitAndAssign<&IpvNAddr> for IpvNAddr

impl BitOr<IpvNAddr> for IpvNAddr
impl BitOr<&IpvNAddr> for IpvNAddr
impl BitOr<IpvNAddr> for &IpvNAddr
impl BitOr<&IpvNAddr> for &IpvNAddr

impl BitOrAssign<IpvNAddr> for IpvNAddr
impl BitOrAssign<&IpvNAddr> for IpvNAddr
```
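A short sketch of how these operators can be used, e.g. masking an address against a netmask (values chosen for illustration):

```rust
use std::net::Ipv4Addr;

fn main() {
    let addr = Ipv4Addr::new(192, 168, 1, 42);
    let mask = Ipv4Addr::new(255, 255, 255, 0);

    // BitAnd with a netmask yields the network address.
    assert_eq!(addr & mask, Ipv4Addr::new(192, 168, 1, 0));

    // Not inverts the mask; BitOr then yields the broadcast address.
    assert_eq!(addr | !mask, Ipv4Addr::new(192, 168, 1, 255));
}
```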
2023-10-15 23:05:06 +00:00
Matthias Krüger
32da83d338 Rollup merge of #116760 - Nilstrieb:triviality, r=oli-obk
Remove trivial cast in `guaranteed_eq`

I found this while accidentally breaking trivial casts in another branch.

r? oli-obk
2023-10-15 21:29:09 +02:00
bors
64368d0279 Auto merge of #110729 - ColinFinck:decode-utf16-fused-iterator, r=dtolnay
Implement FusedIterator for DecodeUtf16 when the inner iterator does

I have just implemented an iterator that wraps `DecodeUtf16` and wanted to implement `FusedIterator` for my iterator when I noticed that `DecodeUtf16` currently doesn't implement `FusedIterator` at all.
A quick look at the code of `DecodeUtf16` revealed that `DecodeUtf16::next` only returns `None` when its inner iterator returns `None`:
3462f79e94/library/core/src/char/decode.rs (L45)

As a result, we can implement `FusedIterator` for `DecodeUtf16` when the inner iterator does.

I'm following the example of #96397 here and consider this change minor and non-controversial, which is why I haven't added an RFC. I have also added the required feature name (`"decode_utf16_fused_iterator"`), though without adding a chapter to the Rust Unstable Book (same as #96397).
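A small sketch of the property this relies on: once the (fused) inner iterator is exhausted, `DecodeUtf16` keeps returning `None`:

```rust
fn main() {
    // Decode the UTF-16 code units for "hi".
    let units = [0x0068u16, 0x0069];
    let mut iter = char::decode_utf16(units);

    assert_eq!(iter.next().unwrap().unwrap(), 'h');
    assert_eq!(iter.next().unwrap().unwrap(), 'i');

    // The inner (array) iterator is fused, so `next` keeps returning `None`
    // after exhaustion, which is exactly what `FusedIterator` promises.
    assert!(iter.next().is_none());
    assert!(iter.next().is_none());
}
```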
2023-10-15 17:09:37 +00:00
Ralf Jung
9d8506d27f acquire loads can be done as relaxed load; acquire fence 2023-10-15 17:41:50 +02:00
Ralf Jung
9b8686d832 only guarantee for Relaxed; add ptr-size fallback 2023-10-15 17:41:50 +02:00
Ralf Jung
275d5c8251 wording 2023-10-15 17:41:50 +02:00
Ralf Jung
69b62ecc69 define 'read-only memory' 2023-10-15 17:41:50 +02:00
Ralf Jung
07b8c10ed8 add general powerpc64le bound
(some powerpc64le targets can guarantee more, but for now it doesn't seem worth separating by OS/vendor)
2023-10-15 17:41:50 +02:00
Ralf Jung
7453235feb add ARM and RISC-V values 2023-10-15 17:41:50 +02:00
Ralf Jung
b5e67a00d9 document when atomic loads are guaranteed read-only 2023-10-15 17:41:50 +02:00
Nilstrieb
fe9d422e7b Remove trivial cast in guaranteed_eq
I found this while accidentally breaking trivial casts in another
branch.
2023-10-15 12:33:44 +02:00
Matthias Krüger
e86e6b45e7 Rollup merge of #116594 - tae-soo-kim:convert-tryfrom-doc, r=scottmcm
Fix `std::convert::TryFrom` doc

Original text:

> truncating the [i64](https://doc.rust-lang.org/std/primitive.i64.html) to an [i32](https://doc.rust-lang.org/std/primitive.i32.html) (essentially giving the [i64](https://doc.rust-lang.org/std/primitive.i64.html)’s value modulo [i32::MAX](https://doc.rust-lang.org/std/primitive.i32.html#associatedconstant.MAX))

This can't be true, because `i32::MAX` is an odd number. The correct modulus seems to be `(i32::MAX + 1) * 2`, i.e. 2^32, but this is complicated and distracting, so I suggest removing the parenthetical entirely.
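A worked example of the truncation being described, showing that the modulus is 2^32, i.e. `(i32::MAX as i64 + 1) * 2`:

```rust
fn main() {
    let big: i64 = 9_000_000_000; // does not fit in an i32

    // `as` truncation keeps the low 32 bits: the value modulo 2^32,
    // reinterpreted as two's complement.
    let truncated = big as i32;

    let modulus: i64 = (i32::MAX as i64 + 1) * 2; // 4_294_967_296 = 2^32
    let expected = big.rem_euclid(modulus) as u32 as i32;
    assert_eq!(truncated, expected);
    assert_eq!(truncated, 410_065_408);
}
```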
2023-10-15 11:37:23 +02:00
bors
0d410be23c Auto merge of #115515 - the8472:zip-for-arrays, r=scottmcm
optimize zipping over array iterators

Fixes #115339 (somewhat)

the new assembly:

```asm
zip_arrays:
        .cfi_startproc
        vmovups (%rdx), %ymm0
        leaq    32(%rsi), %rcx
        vxorps  %xmm1, %xmm1, %xmm1
        vmovups %xmm1, -24(%rsp)
        movq    $0, -8(%rsp)
        movq    %rsi, -88(%rsp)
        movq    %rdi, %rax
        movq    %rcx, -80(%rsp)
        vmovups %ymm0, -72(%rsp)
        movq    $0, -40(%rsp)
        movq    $32, -32(%rsp)
        movq    -24(%rsp), %rcx
        vmovups (%rsi,%rcx), %ymm0
        vorps   -72(%rsp,%rcx), %ymm0, %ymm0
        vmovups %ymm0, (%rsi,%rcx)
        vmovups (%rsi), %ymm0
        vmovups %ymm0, (%rdi)
        vzeroupper
        retq
```

This is still longer than the slice version given in the issue, but at least it eliminates the terrible `vpextrb`/`orb` chain. I guess this is due to excessive memcpys again (I haven't looked at the LLVM IR)?

The `TrustedLen` specialization is a drive-by change since I had to do something for the default impl anyway to be able to specialize the `TrustedRandomAccessNoCoerce` impl.
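For reference, a rough sketch (function name and element operation hypothetical) of the kind of array-zipping code whose codegen is discussed above; the exact code in the linked issue may differ:

```rust
// A rough sketch of an array-zipping function; names and shapes are
// illustrative, not taken from the issue verbatim.
pub fn zip_arrays(a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    // Zip the two array iterators and combine elements, mirroring the
    // element-wise `vorps` in the assembly above.
    for (o, (x, y)) in out.iter_mut().zip(a.into_iter().zip(b)) {
        *o = x | y;
    }
    out
}

fn main() {
    let a = [0b0101_0101u8; 32];
    let b = [0b1010_1010u8; 32];
    assert_eq!(zip_arrays(a, b), [0xffu8; 32]);
}
```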
2023-10-15 00:49:21 +00:00
Guillaume Gomez
fcd75ccc90 Rollup merge of #116540 - daxpedda:once-cell-lock-try-insert, r=Mark-Simulacrum
Implement `OnceCell/Lock::try_insert()`

I took inspiration from [`once_cell`](https://crates.io/crates/once_cell):
- [`once_cell::unsync::OnceCell::try_insert()`](874f9373ab/src/lib.rs (L551-L563))
- [`once_cell::sync::OnceCell::try_insert()`](874f9373ab/src/lib.rs (L1080-L1087))

I tried to change as little code as possible in the first commit and applied some obvious optimizations in the second one.

ACP: https://github.com/rust-lang/libs-team/issues/276
Tracking issue: #116693
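A minimal usage sketch, assuming the nightly feature gate is named `once_cell_try_insert` (tracking issue #116693):

```rust
// Requires a nightly toolchain; the feature-gate name is an assumption here.
#![feature(once_cell_try_insert)]

use std::cell::OnceCell;

fn main() {
    let cell = OnceCell::new();

    // The first insert stores the value and returns a reference to it.
    assert_eq!(cell.try_insert(92), Ok(&92));

    // A later insert fails, handing back the current value and the rejected one.
    assert_eq!(cell.try_insert(62), Err((&92, 62)));
}
```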
2023-10-14 22:35:05 +02:00
Matthias Krüger
3899957086 Rollup merge of #115653 - joshlf:patch-9, r=dtolnay
Guarantee that Layout::align returns a non-zero power of two
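A small illustration of the guarantee (sketch only):

```rust
use std::alloc::Layout;

fn main() {
    let layout = Layout::new::<u64>();

    // `Layout::align` is guaranteed to return a non-zero power of two.
    let align = layout.align();
    assert!(align > 0 && align.is_power_of_two());
}
```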
2023-10-14 13:48:18 +02:00
bors
39acbed8d6 Auto merge of #116407 - Mark-Simulacrum:bootstrap-bump, r=onur-ozkan
Bump bootstrap compiler to just-released beta

https://forge.rust-lang.org/release/process.html#master-bootstrap-update-t-2-day-tuesday
2023-10-14 05:44:48 +00:00
Joshua Liebow-Feeser
9703cb2deb Guarantee representation of None in NPO 2023-10-14 04:41:17 +00:00
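A small sketch of the niche/null-pointer-optimization guarantee referenced here (illustrative only):

```rust
use std::mem::{size_of, transmute};

fn main() {
    // Null-pointer optimization: `Option<&T>` is pointer-sized...
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // ...and `None` is represented by the all-zero (null) bit pattern.
    let none: Option<&u8> = None;
    assert_eq!(unsafe { transmute::<Option<&u8>, usize>(none) }, 0);
}
```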
bors
2a7c2df506 Auto merge of #115719 - tgross35:atomic-from-ptr, r=dtolnay
Stabilize `atomic_from_ptr`

This stabilizes `atomic_from_ptr` and moves the const gate to `const_atomic_from_ptr`. Const stability is blocked on `const_mut_refs`.

Tracking issue: #108652

Newly stable API:

```rust
// core::atomic

impl AtomicBool { pub unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool; }

impl<T> AtomicPtr<T> { pub unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T>; }

impl AtomicU8    { pub unsafe fn from_ptr<'a>(ptr: *mut u8)    -> &'a AtomicU8;    }
impl AtomicU16   { pub unsafe fn from_ptr<'a>(ptr: *mut u16)   -> &'a AtomicU16;   }
impl AtomicU32   { pub unsafe fn from_ptr<'a>(ptr: *mut u32)   -> &'a AtomicU32;   }
impl AtomicU64   { pub unsafe fn from_ptr<'a>(ptr: *mut u64)   -> &'a AtomicU64;   }
impl AtomicUsize { pub unsafe fn from_ptr<'a>(ptr: *mut usize) -> &'a AtomicUsize; }

impl AtomicI8    { pub unsafe fn from_ptr<'a>(ptr: *mut i8)    -> &'a AtomicI8;    }
impl AtomicI16   { pub unsafe fn from_ptr<'a>(ptr: *mut i16)   -> &'a AtomicI16;   }
impl AtomicI32   { pub unsafe fn from_ptr<'a>(ptr: *mut i32)   -> &'a AtomicI32;   }
impl AtomicI64   { pub unsafe fn from_ptr<'a>(ptr: *mut i64)   -> &'a AtomicI64;   }
impl AtomicIsize { pub unsafe fn from_ptr<'a>(ptr: *mut isize) -> &'a AtomicIsize; }
```
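A brief usage sketch of the newly stable constructor (the documented safety requirements still apply):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let mut value: u32 = 7;
    let ptr: *mut u32 = &mut value;

    // SAFETY: `ptr` is non-null and properly aligned, and for the lifetime of
    // the returned reference the value is only accessed through the atomic.
    let atomic: &AtomicU32 = unsafe { AtomicU32::from_ptr(ptr) };
    atomic.store(42, Ordering::Relaxed);
    assert_eq!(atomic.load(Ordering::Relaxed), 42);
}
```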
2023-10-14 02:45:21 +00:00
Maybe Waffle
963131e99c Derive Ord, PartialOrd and Hash for SocketAddr*
...instead of hand-rolling impls, since:
1. It's nicer.
2. It fixes a buggy `Ord` impl of `SocketAddrV6`, which ignored half of the fields.
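A sketch of the effect: with the derived impls, fields such as `scope_id` now participate in ordering and hashing (values chosen for illustration):

```rust
use std::cmp::Ordering;
use std::net::{Ipv6Addr, SocketAddrV6};

fn main() {
    // Two addresses that differ only in scope_id.
    let a = SocketAddrV6::new(Ipv6Addr::LOCALHOST, 8080, 0, 1);
    let b = SocketAddrV6::new(Ipv6Addr::LOCALHOST, 8080, 0, 2);

    // With the derived `Ord`, every field takes part in the comparison, so
    // these two values do not compare as equal.
    assert_ne!(a.cmp(&b), Ordering::Equal);
}
```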
2023-10-14 00:48:22 +00:00
Trevor Gross
227c844b16 Stabilize 'atomic_from_ptr', move const gate to 'const_atomic_from_ptr' 2023-10-13 16:10:33 -04:00
Trevor Gross
3209d2d46e Correct documentation for atomic_from_ptr
* Remove duplicate alignment note that mentioned `AtomicBool` with other
  types
* Update safety requirements about when non-atomic operations are
  allowed
2023-10-13 16:10:29 -04:00
bors
57ef889852 Auto merge of #116233 - DaniPopes:stabilize-const_maybe_uninit_assume_init_read, r=dtolnay
Stabilize `const_maybe_uninit_assume_init_read`

AFAICT the only reason this was not included in the `maybe_uninit_extra` stabilization was that `ptr::read` was not yet stable in const contexts (https://github.com/rust-lang/rust/pull/92768#issuecomment-1011101383), which has since been stabilized in 1.71.

Needs a separate FCP from the [original `maybe_uninit_extra` one](https://github.com/rust-lang/rust/issues/63567#issuecomment-964428807).

Tracking issue: #63567
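A minimal sketch of what this enables, reading out of a `MaybeUninit` during const evaluation:

```rust
use std::mem::MaybeUninit;

// `assume_init_read` can now be used in const evaluation.
const VALUE: u32 = {
    let cell = MaybeUninit::new(42u32);
    // SAFETY: `cell` was initialized just above.
    unsafe { cell.assume_init_read() }
};

fn main() {
    assert_eq!(VALUE, 42);
}
```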
2023-10-13 17:11:03 +00:00
Joshua Liebow-Feeser
55487e235b Update primitive_docs.rs 2023-10-13 09:49:23 -07:00
Joshua Liebow-Feeser
39660c4a77 Update library/core/src/primitive_docs.rs
Co-authored-by: Ralf Jung <post@ralfj.de>
2023-10-13 09:47:39 -07:00
daxpedda
dd34d9027a Add some optimizations 2023-10-13 14:54:33 +02:00
daxpedda
6db2587999 Implement OnceCell/Lock::try_insert() 2023-10-13 14:54:32 +02:00
ltdk
91405ab74a Clean up unchecked_math, separate out unchecked_shifts 2023-10-13 02:17:08 -04:00
ltdk
6b13950978 Remove Not for IpAddr 2023-10-13 02:15:19 -04:00
ltdk
46bb49acb5 impl Not, Bit{And,Or,Xor}{,Assign} for IP addresses 2023-10-13 02:15:19 -04:00
Joshua Liebow-Feeser
4f0192a756 Update primitive_docs.rs 2023-10-12 18:55:45 -07:00
Joshua Liebow-Feeser
a9b0966aa5 Update library/core/src/alloc/layout.rs
Co-authored-by: David Tolnay <dtolnay@gmail.com>
2023-10-12 16:03:45 -07:00
Joshua Liebow-Feeser
a20866254c References refer to allocated objects 2023-10-12 15:35:03 -07:00
bors
156da98b29 Auto merge of #112818 - Benjamin-L:add-slice_split_once, r=cuviper
Implement `slice::split_once` and `slice::rsplit_once`

The feature gate is `slice_split_once` and the tracking issue is #112811. These are the slice equivalents of the existing `str::split_once` and `str::rsplit_once` methods.
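A minimal sketch of the new API on nightly, assuming the predicate-based signature used by the feature:

```rust
// Requires a nightly toolchain with the `slice_split_once` feature.
#![feature(slice_split_once)]

fn main() {
    let data = [1, 2, 3, 0, 4, 5];

    // Split around the first element matching the predicate; the matched
    // element is not included in either half.
    assert_eq!(
        data.split_once(|&x| x == 0),
        Some((&[1, 2, 3][..], &[4, 5][..]))
    );

    // `rsplit_once` splits around the last matching element instead.
    assert_eq!(
        data.rsplit_once(|&x| x == 0),
        Some((&[1, 2, 3][..], &[4, 5][..]))
    );
}
```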
2023-10-11 08:19:13 +00:00
tae-soo-kim
e15e9a673e Update mod.rs 2023-10-10 07:05:25 +00:00
Michael Howell
c6e6ecb1af rustdoc: remove rust logo from non-Rust crates 2023-10-08 20:17:53 -07:00
Mark Rousskov
ea1066d0be Bump to latest beta 2023-10-08 19:57:43 -04:00
bors
598e29bf70 Auto merge of #100806 - timvermeulen:split_inclusive_double_ended_bound, r=dtolnay
Fix generic bound of `str::SplitInclusive`'s `DoubleEndedIterator` impl

`str::SplitInclusive`'s `DoubleEndedIterator` implementation currently uses a `ReverseSearcher` bound for the corresponding searcher. A `DoubleEndedSearcher` bound should have been used instead.

`DoubleEndedIterator` requires that repeated `next_back` calls produce the same items as repeated `next` calls, in opposite order. `ReverseSearcher` lets you search starting from the back of a string, but it makes no guarantees about how its matches correspond to the matches found by a forward search. `DoubleEndedSearcher` is a subtrait of `ReverseSearcher` and does require that the same matches are found in both directions.

This bug fix is a breaking change. Calling `next_back` on `"a+++b".split_inclusive("++")` is currently accepted with repeated calls producing `"b"` and `"a+++"`, while forward iteration yields `"a++"` and `"+b"`. Also see https://github.com/rust-lang/rust/issues/100756#issuecomment-1221307166 for more details.
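A short sketch of the forward behavior described above; only the forward direction is asserted, since `&str` patterns are not double-ended searchers and therefore lose the `DoubleEndedIterator` impl under the corrected bound:

```rust
fn main() {
    // Forward iteration finds the leftmost "++" and keeps the delimiter at
    // the end of each piece.
    let forward: Vec<&str> = "a+++b".split_inclusive("++").collect();
    assert_eq!(forward, ["a++", "+b"]);

    // Before the fix, calling `next_back` on the same iterator compiled and
    // yielded "b" then "a+++", contradicting the `DoubleEndedIterator` contract.
}
```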

I believe that this is the only iterator that uses this bound incorrectly — other related iterators such as `str::Split` do have a `DoubleEndedSearcher` bound for their `DoubleEndedIterator` implementation. And `slice::SplitInclusive` doesn't face this problem at all because it doesn't use patterns, only a predicate.

cc `@SkiFire13`
2023-10-07 17:10:02 +00:00