Pandas 2.2 Official Tutorials and Guides (Part 22)


Source: pandas.pydata.org/docs/

Timedeltas

Source: pandas.pydata.org/docs/user_guide/timedeltas.html

Timedeltas are differences in times, expressed in different units, e.g. days, hours, minutes, seconds. They can be both positive and negative.

Timedelta is a subclass of datetime.timedelta, and behaves in a similar manner, but allows compatibility with np.timedelta64 types as well as a host of custom representations, parsing, and attributes.
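Because the relationship to datetime.timedelta is a true subclass relationship, a Timedelta passes isinstance checks and can be handed to standard-library code that expects a datetime.timedelta. A minimal sketch:

```python
import datetime

import pandas as pd

td = pd.Timedelta("1 days 2 hours")

# Timedelta is a subclass of datetime.timedelta, so isinstance checks pass
assert isinstance(td, datetime.timedelta)

# and it interoperates with stdlib datetime arithmetic
later = datetime.datetime(2012, 1, 1) + td
```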

Parsing

You can construct a Timedelta scalar through various arguments, including ISO 8601 Duration strings.

In [1]: import datetime

# strings
In [2]: pd.Timedelta("1 days")
Out[2]: Timedelta('1 days 00:00:00')

In [3]: pd.Timedelta("1 days 00:00:00")
Out[3]: Timedelta('1 days 00:00:00')

In [4]: pd.Timedelta("1 days 2 hours")
Out[4]: Timedelta('1 days 02:00:00')

In [5]: pd.Timedelta("-1 days 2 min 3us")
Out[5]: Timedelta('-2 days +23:57:59.999997')

# like datetime.timedelta
# note: these MUST be specified as keyword arguments
In [6]: pd.Timedelta(days=1, seconds=1)
Out[6]: Timedelta('1 days 00:00:01')

# integers with a unit
In [7]: pd.Timedelta(1, unit="d")
Out[7]: Timedelta('1 days 00:00:00')

# from a datetime.timedelta/np.timedelta64
In [8]: pd.Timedelta(datetime.timedelta(days=1, seconds=1))
Out[8]: Timedelta('1 days 00:00:01')

In [9]: pd.Timedelta(np.timedelta64(1, "ms"))
Out[9]: Timedelta('0 days 00:00:00.001000')

# negative Timedeltas have this string repr
# to be more consistent with datetime.timedelta conventions
In [10]: pd.Timedelta("-1us")
Out[10]: Timedelta('-1 days +23:59:59.999999')

# a NaT
In [11]: pd.Timedelta("nan")
Out[11]: NaT

In [12]: pd.Timedelta("nat")
Out[12]: NaT

# ISO 8601 Duration strings
In [13]: pd.Timedelta("P0DT0H1M0S")
Out[13]: Timedelta('0 days 00:01:00')

In [14]: pd.Timedelta("P0DT0H0M0.000000123S")
Out[14]: Timedelta('0 days 00:00:00.000000123')

DateOffsets (Day, Hour, Minute, Second, Milli, Micro, Nano) can also be used in construction.

In [15]: pd.Timedelta(pd.offsets.Second(2))
Out[15]: Timedelta('0 days 00:00:02') 

Further, operations among the scalars yield another scalar Timedelta.

In [16]: pd.Timedelta(pd.offsets.Day(2)) + pd.Timedelta(pd.offsets.Second(2)) + pd.Timedelta(
   ....:     "00:00:00.000123"
   ....: )
Out[16]: Timedelta('2 days 00:00:02.000123') 

to_timedelta

Using the top-level pd.to_timedelta, you can convert a scalar, array, list, or Series from a recognized timedelta format / value into a Timedelta type. It will construct a Series if the input is a Series, a scalar if the input is scalar-like, otherwise it will output a TimedeltaIndex.

You can parse a single string to a Timedelta:

In [17]: pd.to_timedelta("1 days 06:05:01.00003")
Out[17]: Timedelta('1 days 06:05:01.000030')

In [18]: pd.to_timedelta("15.5us")
Out[18]: Timedelta('0 days 00:00:00.000015500') 

or a list/array of strings:

In [19]: pd.to_timedelta(["1 days 06:05:01.00003", "15.5us", "nan"])
Out[19]: TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015500', NaT], dtype='timedelta64[ns]', freq=None) 

The unit keyword argument specifies the unit of the Timedelta if the input is numeric:

In [20]: pd.to_timedelta(np.arange(5), unit="s")
Out[20]: 
TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:01', '0 days 00:00:02',
                '0 days 00:00:03', '0 days 00:00:04'],
               dtype='timedelta64[ns]', freq=None)

In [21]: pd.to_timedelta(np.arange(5), unit="d")
Out[21]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None) 

Warning

If a string or array of strings is passed as an input, then the unit keyword argument will be ignored. If a string without units is passed, then the default unit of nanoseconds is assumed.
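As a quick illustration of the unit keyword behavior described above (a minimal sketch): unit only applies to numeric input, while strings carry their own units:

```python
import numpy as np
import pandas as pd

# numeric input: unit determines the scale ("m" is minutes)
a = pd.to_timedelta(5, unit="m")
b = pd.to_timedelta(np.arange(3), unit="h")

# string input: the unit is parsed from the string itself
c = pd.to_timedelta("5m")

assert a == pd.Timedelta(minutes=5)
assert c == a
```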

Timedelta limitations

pandas represents Timedeltas in nanosecond resolution using 64 bit integers. As such, the 64 bit integer limits determine the Timedelta limits.

In [22]: pd.Timedelta.min
Out[22]: Timedelta('-106752 days +00:12:43.145224193')

In [23]: pd.Timedelta.max
Out[23]: Timedelta('106751 days 23:47:16.854775807') 
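The bounds follow directly from the 64 bit nanosecond representation. A minimal sketch of the relationship (assuming the default nanosecond resolution):

```python
import pandas as pd

# Timedelta.max corresponds to the largest signed 64 bit integer, in nanoseconds
assert pd.Timedelta.max == pd.Timedelta(2**63 - 1, unit="ns")

# Timedelta.min mirrors it: the int64 minimum itself (-2**63)
# is reserved as the NaT sentinel, so the usable range is symmetric
assert pd.Timedelta.min == -pd.Timedelta.max
```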
Operations

You can operate on Series/DataFrames and construct timedelta64[ns] Series through subtraction operations on datetime64[ns] Series, or Timestamps.
In [24]: s = pd.Series(pd.date_range("2012-1-1", periods=3, freq="D"))

In [25]: td = pd.Series([pd.Timedelta(days=i) for i in range(3)])

In [26]: df = pd.DataFrame({"A": s, "B": td})

In [27]: df
Out[27]: 
           A      B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days

In [28]: df["C"] = df["A"] + df["B"]

In [29]: df
Out[29]: 
           A      B          C
0 2012-01-01 0 days 2012-01-01
1 2012-01-02 1 days 2012-01-03
2 2012-01-03 2 days 2012-01-05

In [30]: df.dtypes
Out[30]: 
A     datetime64[ns]
B    timedelta64[ns]
C     datetime64[ns]
dtype: object

In [31]: s - s.max()
Out[31]: 
0   -2 days
1   -1 days
2    0 days
dtype: timedelta64[ns]

In [32]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[32]: 
0   364 days 20:55:00
1   365 days 20:55:00
2   366 days 20:55:00
dtype: timedelta64[ns]

In [33]: s + datetime.timedelta(minutes=5)
Out[33]: 
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]

In [34]: s + pd.offsets.Minute(5)
Out[34]: 
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]

In [35]: s + pd.offsets.Minute(5) + pd.offsets.Milli(5)
Out[35]: 
0   2012-01-01 00:05:00.005
1   2012-01-02 00:05:00.005
2   2012-01-03 00:05:00.005
dtype: datetime64[ns]

Operations with scalars from a timedelta64[ns] Series:

In [36]: y = s - s[0]

In [37]: y
Out[37]: 
0   0 days
1   1 days
2   2 days
dtype: timedelta64[ns] 

Series of timedeltas with NaT values are supported:

In [38]: y = s - s.shift()

In [39]: y
Out[39]: 
0      NaT
1   1 days
2   1 days
dtype: timedelta64[ns] 

Elements can be set to NaT using np.nan analogously to datetimes:

In [40]: y[1] = np.nan

In [41]: y
Out[41]: 
0      NaT
1      NaT
2   1 days
dtype: timedelta64[ns] 

Operands can also appear in a reversed order (a singular object operated with a Series):

In [42]: s.max() - s
Out[42]: 
0   2 days
1   1 days
2   0 days
dtype: timedelta64[ns]

In [43]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[43]: 
0   -365 days +03:05:00
1   -366 days +03:05:00
2   -367 days +03:05:00
dtype: timedelta64[ns]

In [44]: datetime.timedelta(minutes=5) + s
Out[44]: 
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns] 

min, max and the corresponding idxmin, idxmax operations are supported on frames:

In [45]: A = s - pd.Timestamp("20120101") - pd.Timedelta("00:05:05")

In [46]: B = s - pd.Series(pd.date_range("2012-1-2", periods=3, freq="D"))

In [47]: df = pd.DataFrame({"A": A, "B": B})

In [48]: df
Out[48]: 
                  A       B
0 -1 days +23:54:55 -1 days
1   0 days 23:54:55 -1 days
2   1 days 23:54:55 -1 days

In [49]: df.min()
Out[49]: 
A   -1 days +23:54:55
B   -1 days +00:00:00
dtype: timedelta64[ns]

In [50]: df.min(axis=1)
Out[50]: 
0   -1 days
1   -1 days
2   -1 days
dtype: timedelta64[ns]

In [51]: df.idxmin()
Out[51]: 
A    0
B    0
dtype: int64

In [52]: df.idxmax()
Out[52]: 
A    2
B    0
dtype: int64

min, max, idxmin, idxmax operations are supported on Series as well. A scalar result will be a Timedelta.

In [53]: df.min().max()
Out[53]: Timedelta('-1 days +23:54:55')

In [54]: df.min(axis=1).min()
Out[54]: Timedelta('-1 days +00:00:00')

In [55]: df.min().idxmax()
Out[55]: 'A'

In [56]: df.min(axis=1).idxmin()
Out[56]: 0

You can fillna on timedeltas, passing a timedelta to get a particular value.

In [57]: y.fillna(pd.Timedelta(0))
Out[57]: 
0   0 days
1   0 days
2   1 days
dtype: timedelta64[ns]

In [58]: y.fillna(pd.Timedelta(10, unit="s"))
Out[58]: 
0   0 days 00:00:10
1   0 days 00:00:10
2   1 days 00:00:00
dtype: timedelta64[ns]

In [59]: y.fillna(pd.Timedelta("-1 days, 00:00:05"))
Out[59]: 
0   -1 days +00:00:05
1   -1 days +00:00:05
2     1 days 00:00:00
dtype: timedelta64[ns] 

You can also negate, multiply and use abs on Timedeltas:

In [60]: td1 = pd.Timedelta("-1 days 2 hours 3 seconds")

In [61]: td1
Out[61]: Timedelta('-2 days +21:59:57')

In [62]: -1 * td1
Out[62]: Timedelta('1 days 02:00:03')

In [63]: -td1
Out[63]: Timedelta('1 days 02:00:03')

In [64]: abs(td1)
Out[64]: Timedelta('1 days 02:00:03')
Reductions

Numeric reduction operations for timedelta64[ns] will return Timedelta objects. As usual, NaT is skipped during evaluation.
In [65]: y2 = pd.Series(
   ....:     pd.to_timedelta(["-1 days +00:00:05", "nat", "-1 days +00:00:05", "1 days"])
   ....: )

In [66]: y2
Out[66]: 
0   -1 days +00:00:05
1                 NaT
2   -1 days +00:00:05
3     1 days 00:00:00
dtype: timedelta64[ns]

In [67]: y2.mean()
Out[67]: Timedelta('-1 days +16:00:03.333333334')

In [68]: y2.median()
Out[68]: Timedelta('-1 days +00:00:05')

In [69]: y2.quantile(0.1)
Out[69]: Timedelta('-1 days +00:00:05')

In [70]: y2.sum()
Out[70]: Timedelta('-1 days +00:00:10')
Frequency conversion

Timedelta Series and TimedeltaIndex, as well as Timedelta scalars, can be converted to other frequencies by astyping to a specific timedelta dtype.
In [71]: december = pd.Series(pd.date_range("20121201", periods=4))

In [72]: january = pd.Series(pd.date_range("20130101", periods=4))

In [73]: td = january - december

In [74]: td[2] += datetime.timedelta(minutes=5, seconds=3)

In [75]: td[3] = np.nan

In [76]: td
Out[76]: 
0   31 days 00:00:00
1   31 days 00:00:00
2   31 days 00:05:03
3                NaT
dtype: timedelta64[ns]

# to seconds
In [77]: td.astype("timedelta64[s]")
Out[77]: 
0   31 days 00:00:00
1   31 days 00:00:00
2   31 days 00:05:03
3                NaT
dtype: timedelta64[s] 

For timedelta64 resolutions other than the supported "s", "ms", "us", "ns", an alternative is to divide by another timedelta object. Note that division by a NumPy scalar is true division, while astyping is equivalent to floor division.

# to days
In [78]: td / np.timedelta64(1, "D")
Out[78]: 
0    31.000000
1    31.000000
2    31.003507
3          NaN
dtype: float64 
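The contrast between true division and astype-based flooring can be sketched with sub-second values (a minimal sketch; for positive values the astype truncation matches floor division):

```python
import numpy as np
import pandas as pd

s = pd.Series([pd.Timedelta("1500ms"), pd.Timedelta("2500ms")])

# true division keeps the fractional part as float64
ratio = s / np.timedelta64(1, "s")

# astyping to a coarser resolution drops the sub-second part,
# like floor division for positive values
floored = s.astype("timedelta64[s]")
```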

timedelta64[ns] Series 除以整数或整数系列,或者乘以整数,会得到另一个timedelta64[ns] dtypes Series。

In [79]: td * -1
Out[79]: 
0   -31 days +00:00:00
1   -31 days +00:00:00
2   -32 days +23:54:57
3                  NaT
dtype: timedelta64[ns]

In [80]: td * pd.Series([1, 2, 3, 4])
Out[80]: 
0   31 days 00:00:00
1   62 days 00:00:00
2   93 days 00:15:09
3                NaT
dtype: timedelta64[ns] 

timedelta64[ns] Series 进行四舍五入除法(floor-division)得到一个整数系列。

In [81]: td // pd.Timedelta(days=3, hours=4)
Out[81]: 
0    9.0
1    9.0
2    9.0
3    NaN
dtype: float64

In [82]: pd.Timedelta(days=3, hours=4) // td
Out[82]: 
0    0.0
1    0.0
2    0.0
3    NaN
dtype: float64 

The mod (%) and divmod operations are defined for Timedelta when operating with another timedelta-like or with a numeric argument.

In [83]: pd.Timedelta(hours=37) % datetime.timedelta(hours=2)
Out[83]: Timedelta('0 days 01:00:00')

# divmod against a timedelta-like returns a pair (int, Timedelta)
In [84]: divmod(datetime.timedelta(hours=2), pd.Timedelta(minutes=11))
Out[84]: (10, Timedelta('0 days 00:10:00'))

# divmod against a numeric returns a pair (Timedelta, Timedelta)
In [85]: divmod(pd.Timedelta(hours=25), 86400000000000)
Out[85]: (Timedelta('0 days 00:00:00.000000001'), Timedelta('0 days 01:00:00'))

Attributes

You can access various components of the Timedelta or TimedeltaIndex directly using the attributes days, seconds, microseconds, nanoseconds. These are identical to the values returned by datetime.timedelta; for example, the .seconds attribute represents the number of seconds >= 0 and < 1 day. These values differ depending on the sign of the Timedelta.

These operations can also be directly accessed via the .dt property of a Series.

Note

Note that the attributes are NOT the displayed values of the Timedelta. Use .components to retrieve the displayed values.

For a Series:

In [86]: td.dt.days
Out[86]: 
0    31.0
1    31.0
2    31.0
3     NaN
dtype: float64

In [87]: td.dt.seconds
Out[87]: 
0      0.0
1      0.0
2    303.0
3      NaN
dtype: float64 

You can access the value of the fields for a scalar Timedelta directly.

In [88]: tds = pd.Timedelta("31 days 5 min 3 sec")

In [89]: tds.days
Out[89]: 31

In [90]: tds.seconds
Out[90]: 303

In [91]: (-tds).seconds
Out[91]: 86097

You can access a reduced form of the timedelta via the .components property. This returns a DataFrame indexed similarly to the Series. These are the displayed values of the Timedelta.

In [92]: td.dt.components
Out[92]: 
   days  hours  minutes  seconds  milliseconds  microseconds  nanoseconds
0  31.0    0.0      0.0      0.0           0.0           0.0          0.0
1  31.0    0.0      0.0      0.0           0.0           0.0          0.0
2  31.0    0.0      5.0      3.0           0.0           0.0          0.0
3   NaN    NaN      NaN      NaN           NaN           NaN          NaN

In [93]: td.dt.components.seconds
Out[93]: 
0    0.0
1    0.0
2    3.0
3    NaN
Name: seconds, dtype: float64 

You can convert a Timedelta to an ISO 8601 Duration string with the .isoformat method.

In [94]: pd.Timedelta(
   ....:     days=6, minutes=50, seconds=3, milliseconds=10, microseconds=10, nanoseconds=12
   ....: ).isoformat()
Out[94]: 'P6DT0H50M3.010010012S' 

TimedeltaIndex

To generate an index with time delta, you can use either the TimedeltaIndex or the timedelta_range() constructor.

Using TimedeltaIndex you can pass string-like, Timedelta, timedelta, or np.timedelta64 objects. Passing np.nan/pd.NaT/nat will represent missing values.

In [95]: pd.TimedeltaIndex(
   ....:     [
   ....:         "1 days",
   ....:         "1 days, 00:00:05",
   ....:         np.timedelta64(2, "D"),
   ....:         datetime.timedelta(days=2, seconds=2),
   ....:     ]
   ....: )
Out[95]: 
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00',
                '2 days 00:00:02'],
               dtype='timedelta64[ns]', freq=None)

The string 'infer' can be passed in order to set the frequency of the index as the inferred frequency upon creation:

In [96]: pd.TimedeltaIndex(["0 days", "10 days", "20 days"], freq="infer")
Out[96]: TimedeltaIndex(['0 days', '10 days', '20 days'], dtype='timedelta64[ns]', freq='10D') 

Generating ranges of time deltas

Similar to date_range(), you can construct regular ranges of a TimedeltaIndex using timedelta_range(). The default frequency for timedelta_range is calendar day:

In [97]: pd.timedelta_range(start="1 days", periods=5)
Out[97]: TimedeltaIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], dtype='timedelta64[ns]', freq='D') 

Various combinations of start, end, and periods can be used with timedelta_range:

In [98]: pd.timedelta_range(start="1 days", end="5 days")
Out[98]: TimedeltaIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], dtype='timedelta64[ns]', freq='D')

In [99]: pd.timedelta_range(end="10 days", periods=4)
Out[99]: TimedeltaIndex(['7 days', '8 days', '9 days', '10 days'], dtype='timedelta64[ns]', freq='D') 

The freq parameter can be passed a variety of frequency aliases:

In [100]: pd.timedelta_range(start="1 days", end="2 days", freq="30min")
Out[100]: 
TimedeltaIndex(['1 days 00:00:00', '1 days 00:30:00', '1 days 01:00:00',
                '1 days 01:30:00', '1 days 02:00:00', '1 days 02:30:00',
                '1 days 03:00:00', '1 days 03:30:00', '1 days 04:00:00',
                '1 days 04:30:00', '1 days 05:00:00', '1 days 05:30:00',
                '1 days 06:00:00', '1 days 06:30:00', '1 days 07:00:00',
                '1 days 07:30:00', '1 days 08:00:00', '1 days 08:30:00',
                '1 days 09:00:00', '1 days 09:30:00', '1 days 10:00:00',
                '1 days 10:30:00', '1 days 11:00:00', '1 days 11:30:00',
                '1 days 12:00:00', '1 days 12:30:00', '1 days 13:00:00',
                '1 days 13:30:00', '1 days 14:00:00', '1 days 14:30:00',
                '1 days 15:00:00', '1 days 15:30:00', '1 days 16:00:00',
                '1 days 16:30:00', '1 days 17:00:00', '1 days 17:30:00',
                '1 days 18:00:00', '1 days 18:30:00', '1 days 19:00:00',
                '1 days 19:30:00', '1 days 20:00:00', '1 days 20:30:00',
                '1 days 21:00:00', '1 days 21:30:00', '1 days 22:00:00',
                '1 days 22:30:00', '1 days 23:00:00', '1 days 23:30:00',
                '2 days 00:00:00'],
               dtype='timedelta64[ns]', freq='30min')

In [101]: pd.timedelta_range(start="1 days", periods=5, freq="2D5h")
Out[101]: 
TimedeltaIndex(['1 days 00:00:00', '3 days 05:00:00', '5 days 10:00:00',
                '7 days 15:00:00', '9 days 20:00:00'],
               dtype='timedelta64[ns]', freq='53h')

Specifying start, end, and periods will generate a range of evenly spaced timedeltas from start to end inclusively, with periods number of elements in the resulting TimedeltaIndex:

In [102]: pd.timedelta_range("0 days", "4 days", periods=5)
Out[102]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None)

In [103]: pd.timedelta_range("0 days", "4 days", periods=10)
Out[103]: 
TimedeltaIndex(['0 days 00:00:00', '0 days 10:40:00', '0 days 21:20:00',
                '1 days 08:00:00', '1 days 18:40:00', '2 days 05:20:00',
                '2 days 16:00:00', '3 days 02:40:00', '3 days 13:20:00',
                '4 days 00:00:00'],
               dtype='timedelta64[ns]', freq=None)

Using the TimedeltaIndex

Similarly to the other datetime-like indices, DatetimeIndex and PeriodIndex, you can use TimedeltaIndex as the index of pandas objects.

In [104]: s = pd.Series(
   .....:     np.arange(100),
   .....:     index=pd.timedelta_range("1 days", periods=100, freq="h"),
   .....: )

In [105]: s
Out[105]: 
1 days 00:00:00     0
1 days 01:00:00     1
1 days 02:00:00     2
1 days 03:00:00     3
1 days 04:00:00     4
                   ..
4 days 23:00:00    95
5 days 00:00:00    96
5 days 01:00:00    97
5 days 02:00:00    98
5 days 03:00:00    99
Freq: h, Length: 100, dtype: int64 

Selections work similarly, with coercion on string-likes and slices:

In [106]: s["1 day":"2 day"]
Out[106]: 
1 days 00:00:00     0
1 days 01:00:00     1
1 days 02:00:00     2
1 days 03:00:00     3
1 days 04:00:00     4
                   ..
2 days 19:00:00    43
2 days 20:00:00    44
2 days 21:00:00    45
2 days 22:00:00    46
2 days 23:00:00    47
Freq: h, Length: 48, dtype: int64

In [107]: s["1 day 01:00:00"]
Out[107]: 1

In [108]: s[pd.Timedelta("1 day 1h")]
Out[108]: 1

Furthermore, you can use partial string selection and the range will be inferred:

In [109]: s["1 day":"1 day 5 hours"]
Out[109]: 
1 days 00:00:00    0
1 days 01:00:00    1
1 days 02:00:00    2
1 days 03:00:00    3
1 days 04:00:00    4
1 days 05:00:00    5
Freq: h, dtype: int64 

Operations

Finally, the combination of TimedeltaIndex with DatetimeIndex allows certain combination operations that are NaT-preserving:

In [110]: tdi = pd.TimedeltaIndex(["1 days", pd.NaT, "2 days"])

In [111]: tdi.to_list()
Out[111]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]

In [112]: dti = pd.date_range("20130101", periods=3)

In [113]: dti.to_list()
Out[113]: 
[Timestamp('2013-01-01 00:00:00'),
 Timestamp('2013-01-02 00:00:00'),
 Timestamp('2013-01-03 00:00:00')]

In [114]: (dti + tdi).to_list()
Out[114]: [Timestamp('2013-01-02 00:00:00'), NaT, Timestamp('2013-01-05 00:00:00')]

In [115]: (dti - tdi).to_list()
Out[115]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2013-01-01 00:00:00')]

Conversions

Similarly to frequency conversion on a Series above, you can convert these indices to yet another index.

In [116]: tdi / np.timedelta64(1, "s")
Out[116]: Index([86400.0, nan, 172800.0], dtype='float64')

In [117]: tdi.astype("timedelta64[s]")
Out[117]: TimedeltaIndex(['1 days', NaT, '2 days'], dtype='timedelta64[s]', freq=None) 

Scalar type ops work as well. These can potentially return a different type of index.

# adding or timedelta and date -> datelike
In [118]: tdi + pd.Timestamp("20130101")
Out[118]: DatetimeIndex(['2013-01-02', 'NaT', '2013-01-03'], dtype='datetime64[ns]', freq=None)

# subtraction of a date and a timedelta -> datelike
# note that trying to subtract a date from a Timedelta will raise an exception
In [119]: (pd.Timestamp("20130101") - tdi).to_list()
Out[119]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2012-12-30 00:00:00')]

# timedelta + timedelta -> timedelta
In [120]: tdi + pd.Timedelta("10 days")
Out[120]: TimedeltaIndex(['11 days', NaT, '12 days'], dtype='timedelta64[ns]', freq=None)

# division can result in a Timedelta if the divisor is an integer
In [121]: tdi / 2
Out[121]: TimedeltaIndex(['0 days 12:00:00', NaT, '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)

# or a float64 Index if the divisor is a Timedelta
In [122]: tdi / tdi[0]
Out[122]: Index([1.0, nan, 2.0], dtype='float64')
Resampling

Similar to timeseries resampling, we can resample with a TimedeltaIndex.
In [123]: s.resample("D").mean()
Out[123]: 
1 days    11.5
2 days    35.5
3 days    59.5
4 days    83.5
5 days    97.5
Freq: D, dtype: float64 
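Other resample aggregations work the same way on a TimedeltaIndex. For instance, counting how many hourly observations fall into each daily bin (a minimal sketch reproducing the Series from above):

```python
import numpy as np
import pandas as pd

s = pd.Series(
    np.arange(100),
    index=pd.timedelta_range("1 days", periods=100, freq="h"),
)

# each full day holds 24 hourly points; the last (partial) bin holds the remainder
counts = s.resample("D").count()
```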

解析

你可以通过各种参数构造一个 Timedelta 标量,包括ISO 8601 Duration字符串。

In [1]: import datetime# strings
In [2]: pd.Timedelta("1 days")
Out[2]: Timedelta('1 days 00:00:00')In [3]: pd.Timedelta("1 days 00:00:00")
Out[3]: Timedelta('1 days 00:00:00')In [4]: pd.Timedelta("1 days 2 hours")
Out[4]: Timedelta('1 days 02:00:00')In [5]: pd.Timedelta("-1 days 2 min 3us")
Out[5]: Timedelta('-2 days +23:57:59.999997')# like datetime.timedelta
# note: these MUST be specified as keyword arguments
In [6]: pd.Timedelta(days=1, seconds=1)
Out[6]: Timedelta('1 days 00:00:01')# integers with a unit
In [7]: pd.Timedelta(1, unit="d")
Out[7]: Timedelta('1 days 00:00:00')# from a datetime.timedelta/np.timedelta64
In [8]: pd.Timedelta(datetime.timedelta(days=1, seconds=1))
Out[8]: Timedelta('1 days 00:00:01')In [9]: pd.Timedelta(np.timedelta64(1, "ms"))
Out[9]: Timedelta('0 days 00:00:00.001000')# negative Timedeltas have this string repr
# to be more consistent with datetime.timedelta conventions
In [10]: pd.Timedelta("-1us")
Out[10]: Timedelta('-1 days +23:59:59.999999')# a NaT
In [11]: pd.Timedelta("nan")
Out[11]: NaTIn [12]: pd.Timedelta("nat")
Out[12]: NaT# ISO 8601 Duration strings
In [13]: pd.Timedelta("P0DT0H1M0S")
Out[13]: Timedelta('0 days 00:01:00')In [14]: pd.Timedelta("P0DT0H0M0.000000123S")
Out[14]: Timedelta('0 days 00:00:00.000000123') 

DateOffsets(Day, Hour, Minute, Second, Milli, Micro, Nano)也可以在构造中使用。

In [15]: pd.Timedelta(pd.offsets.Second(2))
Out[15]: Timedelta('0 days 00:00:02') 

此外,标量之间的操作会产生另一个标量 Timedelta

In [16]: pd.Timedelta(pd.offsets.Day(2)) + pd.Timedelta(pd.offsets.Second(2)) + pd.Timedelta(....:    "00:00:00.000123"....: )....: 
Out[16]: Timedelta('2 days 00:00:02.000123') 

to_timedelta

使用顶层的 pd.to_timedelta,你可以将一个被识别的时间差格式/值的标量、数组、列表或 Series 转换为 Timedelta 类型。如果输入是 Series,则会构造 Series;如果输入类似于标量,则会构造标量,否则将输出一个 TimedeltaIndex

你可以将单个字符串解析为 Timedelta:

In [17]: pd.to_timedelta("1 days 06:05:01.00003")
Out[17]: Timedelta('1 days 06:05:01.000030')In [18]: pd.to_timedelta("15.5us")
Out[18]: Timedelta('0 days 00:00:00.000015500') 

或者一个字符串列表/数组:

In [19]: pd.to_timedelta(["1 days 06:05:01.00003", "15.5us", "nan"])
Out[19]: TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015500', NaT], dtype='timedelta64[ns]', freq=None) 

如果输入是数字,则unit关键字参数指定 Timedelta 的单位:

In [20]: pd.to_timedelta(np.arange(5), unit="s")
Out[20]: 
TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:01', '0 days 00:00:02','0 days 00:00:03', '0 days 00:00:04'],dtype='timedelta64[ns]', freq=None)In [21]: pd.to_timedelta(np.arange(5), unit="d")
Out[21]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None) 

警告

如果作为输入传递了字符串或字符串数组,则unit关键字参数将被忽略。如果传递的是没有单位的字符串,则假定为默认单位为纳秒。

Timedelta 的限制

pandas 使用 64 位整数以纳秒分辨率表示 Timedeltas。因此,64 位整数的限制确定了 Timedelta 的限制。

In [22]: pd.Timedelta.min
Out[22]: Timedelta('-106752 days +00:12:43.145224193')In [23]: pd.Timedelta.max
Out[23]: Timedelta('106751 days 23:47:16.854775807') 

to_timedelta

使用顶层的 pd.to_timedelta,你可以将一个被识别的时间差格式/值的标量、数组、列表或 Series 转换为 Timedelta 类型。如果输入是 Series,则会构造 Series;如果输入类似于标量,则会构造标量,否则将输出一个 TimedeltaIndex

你可以将单个字符串解析为 Timedelta:

In [17]: pd.to_timedelta("1 days 06:05:01.00003")
Out[17]: Timedelta('1 days 06:05:01.000030')In [18]: pd.to_timedelta("15.5us")
Out[18]: Timedelta('0 days 00:00:00.000015500') 

或者一个字符串列表/数组:

In [19]: pd.to_timedelta(["1 days 06:05:01.00003", "15.5us", "nan"])
Out[19]: TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015500', NaT], dtype='timedelta64[ns]', freq=None) 

如果输入是数字,则unit关键字参数指定 Timedelta 的单位:

In [20]: pd.to_timedelta(np.arange(5), unit="s")
Out[20]: 
TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:01', '0 days 00:00:02','0 days 00:00:03', '0 days 00:00:04'],dtype='timedelta64[ns]', freq=None)In [21]: pd.to_timedelta(np.arange(5), unit="d")
Out[21]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None) 

警告

如果作为输入传递了字符串或字符串数组,则unit关键字参数将被忽略。如果传递的是没有单位的字符串,则假定为默认单位为纳秒。

Timedelta 的限制

pandas 使用 64 位整数以纳秒分辨率表示 Timedeltas。因此,64 位整数限制确定了 Timedelta 的限制。

In [22]: pd.Timedelta.min
Out[22]: Timedelta('-106752 days +00:12:43.145224193')In [23]: pd.Timedelta.max
Out[23]: Timedelta('106751 days 23:47:16.854775807') 

操作

您可以对 Series/DataFrames 进行操作,并通过减法操作在 datetime64[ns] Series 或 Timestamps 上构建 timedelta64[ns] Series。

In [24]: s = pd.Series(pd.date_range("2012-1-1", periods=3, freq="D"))In [25]: td = pd.Series([pd.Timedelta(days=i) for i in range(3)])In [26]: df = pd.DataFrame({"A": s, "B": td})In [27]: df
Out[27]: A      B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 daysIn [28]: df["C"] = df["A"] + df["B"]In [29]: df
Out[29]: A      B          C
0 2012-01-01 0 days 2012-01-01
1 2012-01-02 1 days 2012-01-03
2 2012-01-03 2 days 2012-01-05In [30]: df.dtypes
Out[30]: 
A     datetime64[ns]
B    timedelta64[ns]
C     datetime64[ns]
dtype: objectIn [31]: s - s.max()
Out[31]: 
0   -2 days
1   -1 days
2    0 days
dtype: timedelta64[ns]In [32]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[32]: 
0   364 days 20:55:00
1   365 days 20:55:00
2   366 days 20:55:00
dtype: timedelta64[ns]In [33]: s + datetime.timedelta(minutes=5)
Out[33]: 
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]In [34]: s + pd.offsets.Minute(5)
Out[34]: 
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]In [35]: s + pd.offsets.Minute(5) + pd.offsets.Milli(5)
Out[35]: 
0   2012-01-01 00:05:00.005
1   2012-01-02 00:05:00.005
2   2012-01-03 00:05:00.005
dtype: datetime64[ns] 

timedelta64[ns] Series 中的标量进行操作:

In [36]: y = s - s[0]In [37]: y
Out[37]: 
0   0 days
1   1 days
2   2 days
dtype: timedelta64[ns] 

支持具有 NaT 值的时间增量 Series:

In [38]: y = s - s.shift()In [39]: y
Out[39]: 
0      NaT
1   1 days
2   1 days
dtype: timedelta64[ns] 

使用 np.nan 类似于日期时间可以将元素设置为 NaT

In [40]: y[1] = np.nanIn [41]: y
Out[41]: 
0      NaT
1      NaT
2   1 days
dtype: timedelta64[ns] 

操作数也可以以相反的顺序出现(一个对象与 Series 进行操作):

In [42]: s.max() - s
Out[42]: 
0   2 days
1   1 days
2   0 days
dtype: timedelta64[ns]In [43]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[43]: 
0   -365 days +03:05:00
1   -366 days +03:05:00
2   -367 days +03:05:00
dtype: timedelta64[ns]In [44]: datetime.timedelta(minutes=5) + s
Out[44]: 
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns] 

在 frames 上支持 min, max 和相应的 idxmin, idxmax 操作:

In [45]: A = s - pd.Timestamp("20120101") - pd.Timedelta("00:05:05")In [46]: B = s - pd.Series(pd.date_range("2012-1-2", periods=3, freq="D"))In [47]: df = pd.DataFrame({"A": A, "B": B})In [48]: df
Out[48]: A       B
0 -1 days +23:54:55 -1 days
1   0 days 23:54:55 -1 days
2   1 days 23:54:55 -1 daysIn [49]: df.min()
Out[49]: 
A   -1 days +23:54:55
B   -1 days +00:00:00
dtype: timedelta64[ns]In [50]: df.min(axis=1)
Out[50]: 
0   -1 days
1   -1 days
2   -1 days
dtype: timedelta64[ns]In [51]: df.idxmin()
Out[51]: 
A    0
B    0
dtype: int64In [52]: df.idxmax()
Out[52]: 
A    2
B    0
dtype: int64 

min, max, idxmin, idxmax 操作也支持在 Series 上。标量结果将是一个 Timedelta

In [53]: df.min().max()
Out[53]: Timedelta('-1 days +23:54:55')In [54]: df.min(axis=1).min()
Out[54]: Timedelta('-1 days +00:00:00')In [55]: df.min().idxmax()
Out[55]: 'A'In [56]: df.min(axis=1).idxmin()
Out[56]: 0 

您可以在 timedeltas 上使用 fillna,传递一个 timedelta 以获取特定值。

In [57]: y.fillna(pd.Timedelta(0))
Out[57]: 
0   0 days
1   0 days
2   1 days
dtype: timedelta64[ns]In [58]: y.fillna(pd.Timedelta(10, unit="s"))
Out[58]: 
0   0 days 00:00:10
1   0 days 00:00:10
2   1 days 00:00:00
dtype: timedelta64[ns]In [59]: y.fillna(pd.Timedelta("-1 days, 00:00:05"))
Out[59]: 
0   -1 days +00:00:05
1   -1 days +00:00:05
2     1 days 00:00:00
dtype: timedelta64[ns] 

您还可以对 Timedeltas 进行取反、乘法和使用 abs

In [60]: td1 = pd.Timedelta("-1 days 2 hours 3 seconds")In [61]: td1
Out[61]: Timedelta('-2 days +21:59:57')In [62]: -1 * td1
Out[62]: Timedelta('1 days 02:00:03')In [63]: -td1
Out[63]: Timedelta('1 days 02:00:03')In [64]: abs(td1)
Out[64]: Timedelta('1 days 02:00:03') 

缩减

对于 timedelta64[ns] 的数值缩减操作将返回 Timedelta 对象。通常在评估过程中跳过 NaT

In [65]: y2 = pd.Series(....:    pd.to_timedelta(["-1 days +00:00:05", "nat", "-1 days +00:00:05", "1 days"])....: )....: In [66]: y2
Out[66]: 
0   -1 days +00:00:05
1                 NaT
2   -1 days +00:00:05
3     1 days 00:00:00
dtype: timedelta64[ns]In [67]: y2.mean()
Out[67]: Timedelta('-1 days +16:00:03.333333334')In [68]: y2.median()
Out[68]: Timedelta('-1 days +00:00:05')In [69]: y2.quantile(0.1)
Out[69]: Timedelta('-1 days +00:00:05')In [70]: y2.sum()
Out[70]: Timedelta('-1 days +00:00:10') 

频率转换

Timedelta Series 和 TimedeltaIndex,以及 Timedelta 可以通过转换为特定的 timedelta dtype 转换为其他频率。

In [71]: december = pd.Series(pd.date_range("20121201", periods=4))In [72]: january = pd.Series(pd.date_range("20130101", periods=4))In [73]: td = january - decemberIn [74]: td[2] += datetime.timedelta(minutes=5, seconds=3)In [75]: td[3] = np.nanIn [76]: td
Out[76]: 
0   31 days 00:00:00
1   31 days 00:00:00
2   31 days 00:05:03
3                NaT
dtype: timedelta64[ns]# to seconds
In [77]: td.astype("timedelta64[s]")
Out[77]: 
0   31 days 00:00:00
1   31 days 00:00:00
2   31 days 00:05:03
3                NaT
dtype: timedelta64[s] 

对于不支持的“s”、“ms”、“us”、“ns” 的 timedelta64 分辨率,另一种方法是除以另一个 timedelta 对象。请注意,除以 NumPy 标量是真除法,而 astyping 相当于地板除法。

# to days
In [78]: td / np.timedelta64(1, "D")
Out[78]: 
0    31.000000
1    31.000000
2    31.003507
3          NaN
dtype: float64 

timedelta64[ns] Series 除以整数或整数 Series,或乘以整数,将产生另一个 timedelta64[ns] dtypes Series。

In [79]: td * -1
Out[79]: 
0   -31 days +00:00:00
1   -31 days +00:00:00
2   -32 days +23:54:57
3                  NaT
dtype: timedelta64[ns]In [80]: td * pd.Series([1, 2, 3, 4])
Out[80]: 
0   31 days 00:00:00
1   62 days 00:00:00
2   93 days 00:15:09
3                NaT
dtype: timedelta64[ns] 

timedelta64[ns] Series 通过标量 Timedelta 进行四舍五入的除法运算将得到一个整数 Series。

In [81]: td // pd.Timedelta(days=3, hours=4)
Out[81]: 
0    9.0
1    9.0
2    9.0
3    NaN
dtype: float64In [82]: pd.Timedelta(days=3, hours=4) // td
Out[82]: 
0    0.0
1    0.0
2    0.0
3    NaN
dtype: float64 

当与另一个类似 timedelta 或数值参数进行操作时,Timedelta 定义了 mod (%) 和 divmod 操作。

In [83]: pd.Timedelta(hours=37) % datetime.timedelta(hours=2)
Out[83]: Timedelta('0 days 01:00:00')# divmod against a timedelta-like returns a pair (int, Timedelta)
In [84]: divmod(datetime.timedelta(hours=2), pd.Timedelta(minutes=11))
Out[84]: (10, Timedelta('0 days 00:10:00'))# divmod against a numeric returns a pair (Timedelta, Timedelta)
In [85]: divmod(pd.Timedelta(hours=25), 86400000000000)
Out[85]: (Timedelta('0 days 00:00:00.000000001'), Timedelta('0 days 01:00:00')) 

属性

您可以直接使用属性 days,seconds,microseconds,nanoseconds 访问 TimedeltaTimedeltaIndex 的各个组件。这些与 datetime.timedelta 返回的值相同,例如,.seconds 属性表示大于等于 0 且小于 1 天的秒数。这些根据 Timedelta 是否有符号而有符号。

这些操作也可以通过 Series.dt 属性直接访问。

注意

注意,属性不是 Timedelta 的显示值。使用 .components 检索显示值。

对于一个 Series

In [86]: td.dt.days
Out[86]: 
0    31.0
1    31.0
2    31.0
3     NaN
dtype: float64In [87]: td.dt.seconds
Out[87]: 
0      0.0
1      0.0
2    303.0
3      NaN
dtype: float64 

您可以直接访问标量 Timedelta 的字段值。

In [88]: tds = pd.Timedelta("31 days 5 min 3 sec")In [89]: tds.days
Out[89]: 31In [90]: tds.seconds
Out[90]: 303In [91]: (-tds).seconds
Out[91]: 86097 

您可以使用 .components 属性访问时间增量的缩减形式。这将返回一个类似于 Series 的索引的 DataFrame。这些是 Timedelta显示值。

In [92]: td.dt.components
Out[92]: days  hours  minutes  seconds  milliseconds  microseconds  nanoseconds
0  31.0    0.0      0.0      0.0           0.0           0.0          0.0
1  31.0    0.0      0.0      0.0           0.0           0.0          0.0
2  31.0    0.0      5.0      3.0           0.0           0.0          0.0
3   NaN    NaN      NaN      NaN           NaN           NaN          NaNIn [93]: td.dt.components.seconds
Out[93]: 
0    0.0
1    0.0
2    3.0
3    NaN
Name: seconds, dtype: float64 

您可以使用 .isoformat 方法将 Timedelta 转换为 ISO 8601 Duration 字符串。

In [94]: pd.Timedelta(....:    days=6, minutes=50, seconds=3, milliseconds=10, microseconds=10, nanoseconds=12....: ).isoformat()....: 
Out[94]: 'P6DT0H50M3.010010012S' 

TimedeltaIndex

要生成具有时间增量的索引,您可以使用TimedeltaIndextimedelta_range()构造函数。

使用TimedeltaIndex,您可以传递类似字符串的、Timedeltatimedeltanp.timedelta64对象。传递np.nan/pd.NaT/nat将表示缺失值。

In [95]: pd.TimedeltaIndex(....:    [....:        "1 days",....:        "1 days, 00:00:05",....:        np.timedelta64(2, "D"),....:        datetime.timedelta(days=2, seconds=2),....:    ]....: )....: 
Out[95]: 
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00','2 days 00:00:02'],dtype='timedelta64[ns]', freq=None) 

字符串‘infer’可以传递以设置索引的频率为创建时推断的频率:

In [96]: pd.TimedeltaIndex(["0 days", "10 days", "20 days"], freq="infer")
Out[96]: TimedeltaIndex(['0 days', '10 days', '20 days'], dtype='timedelta64[ns]', freq='10D') 

生成时间增量范围

类似于date_range(),您可以使用timedelta_range()构建TimedeltaIndex的常规范围。timedelta_range的默认频率是日历日:

In [97]: pd.timedelta_range(start="1 days", periods=5)
Out[97]: TimedeltaIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], dtype='timedelta64[ns]', freq='D') 

可以使用timedelta_range的各种startendperiods组合:

In [98]: pd.timedelta_range(start="1 days", end="5 days")
Out[98]: TimedeltaIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], dtype='timedelta64[ns]', freq='D')In [99]: pd.timedelta_range(end="10 days", periods=4)
Out[99]: TimedeltaIndex(['7 days', '8 days', '9 days', '10 days'], dtype='timedelta64[ns]', freq='D') 

freq参数可以传递各种频率别名:

In [100]: pd.timedelta_range(start="1 days", end="2 days", freq="30min")
Out[100]: 
TimedeltaIndex(['1 days 00:00:00', '1 days 00:30:00', '1 days 01:00:00',
                '1 days 01:30:00', '1 days 02:00:00', '1 days 02:30:00',
                '1 days 03:00:00', '1 days 03:30:00', '1 days 04:00:00',
                '1 days 04:30:00', '1 days 05:00:00', '1 days 05:30:00',
                '1 days 06:00:00', '1 days 06:30:00', '1 days 07:00:00',
                '1 days 07:30:00', '1 days 08:00:00', '1 days 08:30:00',
                '1 days 09:00:00', '1 days 09:30:00', '1 days 10:00:00',
                '1 days 10:30:00', '1 days 11:00:00', '1 days 11:30:00',
                '1 days 12:00:00', '1 days 12:30:00', '1 days 13:00:00',
                '1 days 13:30:00', '1 days 14:00:00', '1 days 14:30:00',
                '1 days 15:00:00', '1 days 15:30:00', '1 days 16:00:00',
                '1 days 16:30:00', '1 days 17:00:00', '1 days 17:30:00',
                '1 days 18:00:00', '1 days 18:30:00', '1 days 19:00:00',
                '1 days 19:30:00', '1 days 20:00:00', '1 days 20:30:00',
                '1 days 21:00:00', '1 days 21:30:00', '1 days 22:00:00',
                '1 days 22:30:00', '1 days 23:00:00', '1 days 23:30:00',
                '2 days 00:00:00'],
               dtype='timedelta64[ns]', freq='30min')

In [101]: pd.timedelta_range(start="1 days", periods=5, freq="2D5h")
Out[101]: 
TimedeltaIndex(['1 days 00:00:00', '3 days 05:00:00', '5 days 10:00:00',
                '7 days 15:00:00', '9 days 20:00:00'],
               dtype='timedelta64[ns]', freq='53h') 

指定startendperiods将生成从startend的一系列均匀间隔的时间增量,包括startend,结果为TimedeltaIndex中的periods个元素:

In [102]: pd.timedelta_range("0 days", "4 days", periods=5)
Out[102]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None)

In [103]: pd.timedelta_range("0 days", "4 days", periods=10)
Out[103]: 
TimedeltaIndex(['0 days 00:00:00', '0 days 10:40:00', '0 days 21:20:00',
                '1 days 08:00:00', '1 days 18:40:00', '2 days 05:20:00',
                '2 days 16:00:00', '3 days 02:40:00', '3 days 13:20:00',
                '4 days 00:00:00'],
               dtype='timedelta64[ns]', freq=None) 
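As a quick check of the evenly-spaced behaviour described above, a short sketch (assuming only pandas is available):

```python
import pandas as pd

# start/end/periods divides [start, end] into periods - 1 equal steps,
# inclusive of both endpoints.
tdi = pd.timedelta_range("0 days", "4 days", periods=5)
assert list(tdi) == [pd.Timedelta(days=d) for d in range(5)]

# With periods=9 the step works out to 4 days / 8 intervals = 12 hours.
tdi9 = pd.timedelta_range("0 days", "4 days", periods=9)
assert tdi9[1] - tdi9[0] == pd.Timedelta(hours=12)
```

Note that when periods is combined with both endpoints, the resulting index carries freq=None even if the spacing happens to match a regular frequency.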

Using the TimedeltaIndex

Similar to other datetime-like indices, such as DatetimeIndex and PeriodIndex, you can use TimedeltaIndex as the index of pandas objects.

In [104]: s = pd.Series(
   .....:    np.arange(100),
   .....:    index=pd.timedelta_range("1 days", periods=100, freq="h"),
   .....: )
   .....: 

In [105]: s
Out[105]: 
1 days 00:00:00     0
1 days 01:00:00     1
1 days 02:00:00     2
1 days 03:00:00     3
1 days 04:00:00     4
                   ..
4 days 23:00:00    95
5 days 00:00:00    96
5 days 01:00:00    97
5 days 02:00:00    98
5 days 03:00:00    99
Freq: h, Length: 100, dtype: int64 

Selections work similarly, with coercion on string-likes and slices:

In [106]: s["1 day":"2 day"]
Out[106]: 
1 days 00:00:00     0
1 days 01:00:00     1
1 days 02:00:00     2
1 days 03:00:00     3
1 days 04:00:00     4
                   ..
2 days 19:00:00    43
2 days 20:00:00    44
2 days 21:00:00    45
2 days 22:00:00    46
2 days 23:00:00    47
Freq: h, Length: 48, dtype: int64

In [107]: s["1 day 01:00:00"]
Out[107]: 1

In [108]: s[pd.Timedelta("1 day 1h")]
Out[108]: 1 

Furthermore you can use partial string selection, and the range will be inferred:

In [109]: s["1 day":"1 day 5 hours"]
Out[109]: 
1 days 00:00:00    0
1 days 01:00:00    1
1 days 02:00:00    2
1 days 03:00:00    3
1 days 04:00:00    4
1 days 05:00:00    5
Freq: h, dtype: int64 
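The three access styles above (full string label, Timedelta scalar, partial-string slice) can be sketched side by side; this assumes only pandas and numpy:

```python
import numpy as np
import pandas as pd

s = pd.Series(
    np.arange(100),
    index=pd.timedelta_range("1 days", periods=100, freq="h"),
)

# A full string label and an equivalent Timedelta scalar hit the same row...
assert s["1 day 01:00:00"] == 1
assert s[pd.Timedelta("1 day 1h")] == 1

# ...while a partial-string slice is inclusive of the inferred endpoints:
# "1 day" .. "1 day 5 hours" covers six hourly rows.
assert len(s["1 day":"1 day 5 hours"]) == 6
```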

Operations

最后,TimedeltaIndexDatetimeIndex的组合允许进行某些保留 NaT 的组合操作:

In [110]: tdi = pd.TimedeltaIndex(["1 days", pd.NaT, "2 days"])

In [111]: tdi.to_list()
Out[111]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]

In [112]: dti = pd.date_range("20130101", periods=3)

In [113]: dti.to_list()
Out[113]: 
[Timestamp('2013-01-01 00:00:00'),
 Timestamp('2013-01-02 00:00:00'),
 Timestamp('2013-01-03 00:00:00')]

In [114]: (dti + tdi).to_list()
Out[114]: [Timestamp('2013-01-02 00:00:00'), NaT, Timestamp('2013-01-05 00:00:00')]

In [115]: (dti - tdi).to_list()
Out[115]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2013-01-01 00:00:00')] 
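A compact restatement of the NaT-preserving behaviour, as a sketch:

```python
import pandas as pd

tdi = pd.TimedeltaIndex(["1 days", pd.NaT, "2 days"])
dti = pd.date_range("20130101", periods=3)

# NaT propagates through index arithmetic in both directions.
assert (dti + tdi).isna().tolist() == [False, True, False]
assert (dti - tdi)[0] == pd.Timestamp("2012-12-31")
assert (dti - tdi)[2] == pd.Timestamp("2013-01-01")
```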

Conversions

Similarly to frequency conversion on a Series above, you can convert these indices to yield another Index.

In [116]: tdi / np.timedelta64(1, "s")
Out[116]: Index([86400.0, nan, 172800.0], dtype='float64')

In [117]: tdi.astype("timedelta64[s]")
Out[117]: TimedeltaIndex(['1 days', NaT, '2 days'], dtype='timedelta64[s]', freq=None) 

Scalar type ops work as well. These can potentially return a different type of index.

# adding a timedelta and a date -> datelike
In [118]: tdi + pd.Timestamp("20130101")
Out[118]: DatetimeIndex(['2013-01-02', 'NaT', '2013-01-03'], dtype='datetime64[ns]', freq=None)

# subtraction of a date and a timedelta -> datelike
# note that trying to subtract a date from a Timedelta will raise an exception
In [119]: (pd.Timestamp("20130101") - tdi).to_list()
Out[119]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2012-12-30 00:00:00')]

# timedelta + timedelta -> timedelta
In [120]: tdi + pd.Timedelta("10 days")
Out[120]: TimedeltaIndex(['11 days', NaT, '12 days'], dtype='timedelta64[ns]', freq=None)

# division can result in a Timedelta if the divisor is an integer
In [121]: tdi / 2
Out[121]: TimedeltaIndex(['0 days 12:00:00', NaT, '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)

# or a float64 Index if the divisor is a Timedelta
In [122]: tdi / tdi[0]
Out[122]: Index([1.0, nan, 2.0], dtype='float64') 
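The return-type rules above can be summarised in a short sketch: the resulting index type depends on what you add or divide by.

```python
import numpy as np
import pandas as pd

tdi = pd.TimedeltaIndex(["1 days", pd.NaT, "2 days"])

# Dividing by a np.timedelta64 produces a float64 Index (here: seconds)...
secs = tdi / np.timedelta64(1, "s")
assert secs[0] == 86400.0 and np.isnan(secs[1])

# ...dividing by an integer keeps a TimedeltaIndex...
assert (tdi / 2)[0] == pd.Timedelta("12h")

# ...and adding a Timestamp promotes the result to a DatetimeIndex.
assert isinstance(tdi + pd.Timestamp("20130101"), pd.DatetimeIndex)
```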


Resampling

Similar to timeseries resampling, we can resample with a TimedeltaIndex.

In [123]: s.resample("D").mean()
Out[123]: 
1 days    11.5
2 days    35.5
3 days    59.5
4 days    83.5
5 days    97.5
Freq: D, dtype: float64 
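The daily means shown above can be verified directly: the first bin covers hours 0 through 23 of day 1, whose values 0..23 average to 11.5, while the last bin holds only hours 96..99.

```python
import numpy as np
import pandas as pd

s = pd.Series(
    np.arange(100),
    index=pd.timedelta_range("1 days", periods=100, freq="h"),
)

daily = s.resample("D").mean()
assert len(daily) == 5          # one bin per day, days 1 through 5
assert daily.iloc[0] == 11.5    # mean of 0..23
assert daily.iloc[-1] == 97.5   # mean of 96..99 (partial last bin)
```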

Options and settings

Source: pandas.pydata.org/docs/user_guide/options.html

Overview

pandas has an options API to configure and customize global behavior related to DataFrame display, data behavior and more.

Options have a full "dotted-style", case-insensitive name (e.g. display.max_rows). You can get/set options directly as attributes of the top-level options attribute:

In [1]: import pandas as pd

In [2]: pd.options.display.max_rows
Out[2]: 15

In [3]: pd.options.display.max_rows = 999

In [4]: pd.options.display.max_rows
Out[4]: 999 

The API is composed of 5 relevant functions, available directly from the pandas namespace:

  • get_option() / set_option() - get/set the value of a single option.

  • reset_option() - reset one or more options to their default value.

  • describe_option() - print the descriptions of one or more options.

  • option_context() - execute a code block with a set of options that revert to prior settings after execution.

Note

Developers can check out pandas/core/config_init.py for more information.

All of the functions above accept a regexp pattern (re.search style) as an argument, matching an unambiguous substring:

In [5]: pd.get_option("display.chop_threshold")

In [6]: pd.set_option("display.chop_threshold", 2)

In [7]: pd.get_option("display.chop_threshold")
Out[7]: 2

In [8]: pd.set_option("chop", 4)

In [9]: pd.get_option("display.chop_threshold")
Out[9]: 4 

The following will not work because it matches multiple option names, e.g. display.max_colwidth, display.max_rows, display.max_columns:

In [10]: pd.get_option("max")
---------------------------------------------------------------------------
OptionError  Traceback (most recent call last)
Cell In[10], line 1
----> 1 pd.get_option("max")

File ~/work/pandas/pandas/pandas/_config/config.py:274, in CallableDynamicDoc.__call__(self, *args, **kwds)
    273 def __call__(self, *args, **kwds) -> T:
--> 274     return self.__func__(*args, **kwds)

File ~/work/pandas/pandas/pandas/_config/config.py:146, in _get_option(pat, silent)
    145 def _get_option(pat: str, silent: bool = False) -> Any:
--> 146     key = _get_single_key(pat, silent)
    148     # walk the nested dict
    149     root, k = _get_root(key)

File ~/work/pandas/pandas/pandas/_config/config.py:134, in _get_single_key(pat, silent)
    132     raise OptionError(f"No such keys(s): {repr(pat)}")
    133 if len(keys) > 1:
--> 134     raise OptionError("Pattern matched multiple keys")
    135 key = keys[0]
    137 if not silent:

OptionError: Pattern matched multiple keys 

Warning

Using this form of shorthand may cause your code to break if new options with similar names are added in future versions.

Available options

You can get a list of available options and their descriptions with describe_option(). When called with no argument, describe_option() will print out the descriptions for all available options.

In [11]: pd.describe_option()
compute.use_bottleneck : bool
    Use the bottleneck library to accelerate if it is installed,
    the default is True
    Valid values: False,True
    [default: True] [currently: True]
compute.use_numba : bool
    Use the numba engine option for select operations if it is installed,
    the default is False
    Valid values: False,True
    [default: False] [currently: False]
compute.use_numexpr : bool
    Use the numexpr library to accelerate computation if it is installed,
    the default is True
    Valid values: False,True
    [default: True] [currently: True]
display.chop_threshold : float or None
    if set to a float value, all float values smaller than the given threshold
    will be displayed as exactly 0 by repr and friends.
    [default: None] [currently: None]
display.colheader_justify : 'left'/'right'
    Controls the justification of column headers. used by DataFrameFormatter.
    [default: right] [currently: right]
display.date_dayfirst : boolean
    When True, prints and parses dates with the day first, eg 20/01/2005
    [default: False] [currently: False]
display.date_yearfirst : boolean
    When True, prints and parses dates with the year first, eg 2005/01/20
    [default: False] [currently: False]
display.encoding : str/unicode
    Defaults to the detected encoding of the console.
    Specifies the encoding to be used for strings returned by to_string,
    these are generally strings meant to be displayed on the console.
    [default: utf-8] [currently: utf8]
display.expand_frame_repr : boolean
    Whether to print out the full DataFrame repr for wide DataFrames across
    multiple lines, `max_columns` is still respected, but the output will
    wrap-around across multiple "pages" if its width exceeds `display.width`.
    [default: True] [currently: True]
display.float_format : callable
    The callable should accept a floating point number and return
    a string with the desired format of the number. This is used
    in some places like SeriesFormatter.
    See formats.format.EngFormatter for an example.
    [default: None] [currently: None]
display.html.border : int
    A ``border=value`` attribute is inserted in the ``<table>`` tag
    for the DataFrame HTML repr.
    [default: 1] [currently: 1]
display.html.table_schema : boolean
    Whether to publish a Table Schema representation for frontends
    that support it.
    (default: False)
    [default: False] [currently: False]
display.html.use_mathjax : boolean
    When True, Jupyter notebook will process table contents using MathJax,
    rendering mathematical expressions enclosed by the dollar symbol.
    (default: True)
    [default: True] [currently: True]
display.large_repr : 'truncate'/'info'
    For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
    show a truncated table, or switch to the view from
    df.info() (the behaviour in earlier versions of pandas).
    [default: truncate] [currently: truncate]
display.max_categories : int
    This sets the maximum number of categories pandas should output when
    printing out a `Categorical` or a Series of dtype "category".
    [default: 8] [currently: 8]
display.max_columns : int
    If max_cols is exceeded, switch to truncate view. Depending on
    `large_repr`, objects are either centrally truncated or printed as
    a summary view. 'None' value means unlimited.
    In case python/IPython is running in a terminal and `large_repr`
    equals 'truncate' this can be set to 0 or None and pandas will auto-detect
    the width of the terminal and print a truncated object which fits
    the screen width. The IPython notebook, IPython qtconsole, or IDLE
    do not run in a terminal and hence it is not possible to do
    correct auto-detection and defaults to 20.
    [default: 0] [currently: 0]
display.max_colwidth : int or None
    The maximum width in characters of a column in the repr of
    a pandas data structure. When the column overflows, a "..."
    placeholder is embedded in the output. A 'None' value means unlimited.
    [default: 50] [currently: 50]
display.max_dir_items : int
    The number of items that will be added to `dir(...)`. 'None' value means
    unlimited. Because dir is cached, changing this option will not immediately
    affect already existing dataframes until a column is deleted or added.
    This is for instance used to suggest columns from a dataframe to tab
    completion.
    [default: 100] [currently: 100]
display.max_info_columns : int
    max_info_columns is used in DataFrame.info method to decide if
    per column information will be printed.
    [default: 100] [currently: 100]
display.max_info_rows : int
    df.info() will usually show null-counts for each column.
    For large frames this can be quite slow. max_info_rows and max_info_cols
    limit this null check only to frames with smaller dimensions than
    specified.
    [default: 1690785] [currently: 1690785]
display.max_rows : int
    If max_rows is exceeded, switch to truncate view. Depending on
    `large_repr`, objects are either centrally truncated or printed as
    a summary view. 'None' value means unlimited.
    In case python/IPython is running in a terminal and `large_repr`
    equals 'truncate' this can be set to 0 and pandas will auto-detect
    the height of the terminal and print a truncated object which fits
    the screen height. The IPython notebook, IPython qtconsole, or
    IDLE do not run in a terminal and hence it is not possible to do
    correct auto-detection.
    [default: 60] [currently: 60]
display.max_seq_items : int or None
    When pretty-printing a long sequence, no more then `max_seq_items`
    will be printed. If items are omitted, they will be denoted by the
    addition of "..." to the resulting string.
    If set to None, the number of items to be printed is unlimited.
    [default: 100] [currently: 100]
display.memory_usage : bool, string or None
    This specifies if the memory usage of a DataFrame should be displayed when
    df.info() is called. Valid values True,False,'deep'
    [default: True] [currently: True]
display.min_rows : int
    The numbers of rows to show in a truncated view (when `max_rows` is
    exceeded). Ignored when `max_rows` is set to None or 0. When set to
    None, follows the value of `max_rows`.
    [default: 10] [currently: 10]
display.multi_sparse : boolean
    "sparsify" MultiIndex display (don't display repeated
    elements in outer levels within groups)
    [default: True] [currently: True]
display.notebook_repr_html : boolean
    When True, IPython notebook will use html representation for
    pandas objects (if it is available).
    [default: True] [currently: True]
display.pprint_nest_depth : int
    Controls the number of nested levels to process when pretty-printing
    [default: 3] [currently: 3]
display.precision : int
    Floating point output precision in terms of number of places after the
    decimal, for regular formatting as well as scientific notation. Similar
    to ``precision`` in :meth:`numpy.set_printoptions`.
    [default: 6] [currently: 6]
display.show_dimensions : boolean or 'truncate'
    Whether to print out dimensions at the end of DataFrame repr.
    If 'truncate' is specified, only print out the dimensions if the
    frame is truncated (e.g. not display all rows and/or columns)
    [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide : boolean
    Whether to use the Unicode East Asian Width to calculate the display text
    width.
    Enabling this may affect to the performance (default: False)
    [default: False] [currently: False]
display.unicode.east_asian_width : boolean
    Whether to use the Unicode East Asian Width to calculate the display text
    width.
    Enabling this may affect to the performance (default: False)
    [default: False] [currently: False]
display.width : int
    Width of the display in characters. In case python/IPython is running in
    a terminal this can be set to None and pandas will correctly auto-detect
    the width.
    Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
    terminal and hence it is not possible to correctly detect the width.
    [default: 80] [currently: 80]
future.infer_string
    Whether to infer sequence of str objects as pyarrow string dtype, which
    will be the default in pandas 3.0 (at which point this option will be
    deprecated).
    [default: False] [currently: False]
future.no_silent_downcasting
    Whether to opt-in to the future behavior which will *not* silently
    downcast results from Series and DataFrame `where`, `mask`, and `clip`
    methods. Silent downcasting will be removed in pandas 3.0 (at which point
    this option will be deprecated).
    [default: False] [currently: False]
io.excel.ods.reader : string
    The default Excel reader engine for 'ods' files. Available options:
    auto, odf, calamine.
    [default: auto] [currently: auto]
io.excel.ods.writer : string
    The default Excel writer engine for 'ods' files. Available options:
    auto, odf.
    [default: auto] [currently: auto]
io.excel.xls.reader : string
    The default Excel reader engine for 'xls' files. Available options:
    auto, xlrd, calamine.
    [default: auto] [currently: auto]
io.excel.xlsb.reader : string
    The default Excel reader engine for 'xlsb' files. Available options:
    auto, pyxlsb, calamine.
    [default: auto] [currently: auto]
io.excel.xlsm.reader : string
    The default Excel reader engine for 'xlsm' files. Available options:
    auto, xlrd, openpyxl, calamine.
    [default: auto] [currently: auto]
io.excel.xlsm.writer : string
    The default Excel writer engine for 'xlsm' files. Available options:
    auto, openpyxl.
    [default: auto] [currently: auto]
io.excel.xlsx.reader : string
    The default Excel reader engine for 'xlsx' files. Available options:
    auto, xlrd, openpyxl, calamine.
    [default: auto] [currently: auto]
io.excel.xlsx.writer : string
    The default Excel writer engine for 'xlsx' files. Available options:
    auto, openpyxl, xlsxwriter.
    [default: auto] [currently: auto]
io.hdf.default_format : format
    default format writing format, if None, then
    put will default to 'fixed' and append will default to 'table'
    [default: None] [currently: None]
io.hdf.dropna_table : boolean
    drop ALL nan rows when appending to a table
    [default: False] [currently: False]
io.parquet.engine : string
    The default parquet reader/writer engine. Available options:
    'auto', 'pyarrow', 'fastparquet', the default is 'auto'
    [default: auto] [currently: auto]
io.sql.engine : string
    The default sql reader/writer engine. Available options:
    'auto', 'sqlalchemy', the default is 'auto'
    [default: auto] [currently: auto]
mode.chained_assignment : string
    Raise an exception, warn, or no action if trying to use chained assignment,
    The default is warn
    [default: warn] [currently: warn]
mode.copy_on_write : bool
    Use new copy-view behaviour using Copy-on-Write. Defaults to False,
    unless overridden by the 'PANDAS_COPY_ON_WRITE' environment variable
    (if set to "1" for True, needs to be set before pandas is imported).
    [default: False] [currently: False]
mode.data_manager : string
    Internal data manager type; can be "block" or "array". Defaults to "block",
    unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needs
    to be set before pandas is imported).
    [default: block] [currently: block]
    (Deprecated, use `` instead.)
mode.sim_interactive : boolean
    Whether to simulate interactive mode for purposes of testing
    [default: False] [currently: False]
mode.string_storage : string
    The default storage for StringDtype. This option is ignored if
    ``future.infer_string`` is set to True.
    [default: python] [currently: python]
mode.use_inf_as_na : boolean
    True means treat None, NaN, INF, -INF as NA (old way),
    False means None and NaN are null, but INF, -INF are not NA
    (new way).
    This option is deprecated in pandas 2.1.0 and will be removed in 3.0.
    [default: False] [currently: False]
    (Deprecated, use `` instead.)
plotting.backend : str
    The plotting backend to use. The default value is "matplotlib", the
    backend provided with pandas. Other backends can be specified by
    providing the name of the module that implements the backend.
    [default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_converters : bool or 'auto'.
    Whether to register converters with matplotlib's units registry for
    dates, times, datetimes, and Periods. Toggling to False will remove
    the converters, restoring any converters that pandas overwrote.
    [default: auto] [currently: auto]
styler.format.decimal : str
    The character representation for the decimal separator for floats and complex.
    [default: .] [currently: .]
styler.format.escape : str, optional
    Whether to escape certain characters according to the given context; html or latex.
    [default: None] [currently: None]
styler.format.formatter : str, callable, dict, optional
    A formatter object to be used as default within ``Styler.format``.
    [default: None] [currently: None]
styler.format.na_rep : str, optional
    The string representation for values identified as missing.
    [default: None] [currently: None]
styler.format.precision : int
    The precision for floats and complex numbers.
    [default: 6] [currently: 6]
styler.format.thousands : str, optional
    The character representation for thousands separator for floats, int and complex.
    [default: None] [currently: None]
styler.html.mathjax : bool
    If False will render special CSS classes to table attributes that indicate Mathjax
    will not be used in Jupyter Notebook.
    [default: True] [currently: True]
styler.latex.environment : str
    The environment to replace ``\begin{table}``. If "longtable" is used results
    in a specific longtable environment format.
    [default: None] [currently: None]
styler.latex.hrules : bool
    Whether to add horizontal rules on top and bottom and below the headers.
    [default: False] [currently: False]
styler.latex.multicol_align : {"r", "c", "l", "naive-l", "naive-r"}
    The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe
    decorators can also be added to non-naive values to draw vertical
    rules, e.g. "|r" will draw a rule on the left side of right aligned merged cells.
    [default: r] [currently: r]
styler.latex.multirow_align : {"c", "t", "b"}
    The specifier for vertical alignment of sparsified LaTeX multirows.
    [default: c] [currently: c]
styler.render.encoding : str
    The encoding used for output HTML and LaTeX files.
    [default: utf-8] [currently: utf-8]
styler.render.max_columns : int, optional
    The maximum number of columns that will be rendered. May still be reduced to
    satisfy ``max_elements``, which takes precedence.
    [default: None] [currently: None]
styler.render.max_elements : int
    The maximum number of data-cell (<td>) elements that will be rendered before
    trimming will occur over columns, rows or both if needed.
    [default: 262144] [currently: 262144]
styler.render.max_rows : int, optional
    The maximum number of rows that will be rendered. May still be reduced to
    satisfy ``max_elements``, which takes precedence.
    [default: None] [currently: None]
styler.render.repr : str
    Determine which output to use in Jupyter Notebook in {"html", "latex"}.
    [default: html] [currently: html]
styler.sparse.columns : bool
    Whether to sparsify the display of hierarchical columns. Setting to False will
    display each explicit level element in a hierarchical key for each column.
    [default: True] [currently: True]
styler.sparse.index : bool
    Whether to sparsify the display of a hierarchical index. Setting to False will
    display each explicit level element in a hierarchical key for each row.
    [default: True] [currently: True] 

Getting and setting options

As described above, get_option() and set_option() are available from the pandas namespace. To change an option, call set_option('option regex', new_value).

In [12]: pd.get_option("mode.sim_interactive")
Out[12]: False

In [13]: pd.set_option("mode.sim_interactive", True)

In [14]: pd.get_option("mode.sim_interactive")
Out[14]: True 

Note

The option 'mode.sim_interactive' is mostly used for debugging purposes.

You can use reset_option() to revert to a setting's default value:

In [15]: pd.get_option("display.max_rows")
Out[15]: 60

In [16]: pd.set_option("display.max_rows", 999)

In [17]: pd.get_option("display.max_rows")
Out[17]: 999

In [18]: pd.reset_option("display.max_rows")

In [19]: pd.get_option("display.max_rows")
Out[19]: 60 

It's also possible to reset multiple options at once (using a regex):

In [20]: pd.reset_option("^display") 
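A sketch of the set/reset round trip; the default of 60 for display.max_rows matches the output shown above:

```python
import pandas as pd

pd.set_option("display.max_rows", 999)
assert pd.get_option("display.max_rows") == 999

# reset_option accepts a regex, restoring a whole option family at once.
pd.reset_option("^display")
assert pd.get_option("display.max_rows") == 60
```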

The option_context() context manager has been exposed through the top-level API, allowing you to execute code with given option values. Option values are restored automatically when you exit the with block:

In [21]: with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
   ....:    print(pd.get_option("display.max_rows"))
   ....:    print(pd.get_option("display.max_columns"))
   ....: 
10
5

In [22]: print(pd.get_option("display.max_rows"))
60

In [23]: print(pd.get_option("display.max_columns"))
0 
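The restore-on-exit behaviour can be checked with a sketch like this:

```python
import pandas as pd

before = pd.get_option("display.max_rows")

with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
    # Overrides apply only inside the block...
    assert pd.get_option("display.max_rows") == 10
    assert pd.get_option("display.max_columns") == 5

# ...and are rolled back automatically on exit.
assert pd.get_option("display.max_rows") == before
```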

Setting startup options in the Python/IPython environment

Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where the startup folder is in a default IPython profile can be found at:

$IPYTHONDIR/profile_default/startup 

More information can be found in the IPython documentation. An example startup script for pandas is displayed below:

import pandas as pd

pd.set_option("display.max_rows", 999)
pd.set_option("display.precision", 5) 

Frequently used options

The following is a demonstration of the more frequently used display options.

display.max_rowsdisplay.max_columns 设置在打印漂亮的帧时显示的最大行数和列数。被截断的行用省略号替换。

In [24]: df = pd.DataFrame(np.random.randn(7, 2))

In [25]: pd.set_option("display.max_rows", 7)

In [26]: df
Out[26]: 
          0         1
0  0.469112 -0.282863
1 -1.509059 -1.135632
2  1.212112 -0.173215
3  0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929  1.071804
6  0.721555 -0.706771

In [27]: pd.set_option("display.max_rows", 5)

In [28]: df
Out[28]: 
           0         1
0   0.469112 -0.282863
1  -1.509059 -1.135632
..       ...       ...
5  -0.494929  1.071804
6   0.721555 -0.706771

[7 rows x 2 columns]

In [29]: pd.reset_option("display.max_rows") 

Once display.max_rows is exceeded, the display.min_rows option determines how many rows are shown in the truncated repr.

In [30]: pd.set_option("display.max_rows", 8)

In [31]: pd.set_option("display.min_rows", 4)

# below max_rows -> all rows shown
In [32]: df = pd.DataFrame(np.random.randn(7, 2))

In [33]: df
Out[33]: 
          0         1
0 -1.039575  0.271860
1 -0.424972  0.567020
2  0.276232 -1.087401
3 -0.673690  0.113648
4 -1.478427  0.524988
5  0.404705  0.577046
6 -1.715002 -1.039268

# above max_rows -> only min_rows (4) rows shown
In [34]: df = pd.DataFrame(np.random.randn(9, 2))

In [35]: df
Out[35]: 
           0         1
0  -0.370647 -1.157892
1  -1.344312  0.844885
..       ...       ...
7   0.276662 -0.472035
8  -0.013960 -0.362543

[9 rows x 2 columns]

In [36]: pd.reset_option("display.max_rows")

In [37]: pd.reset_option("display.min_rows") 
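The truncation above can be verified programmatically; here option_context is used instead of set/reset (a deliberate substitution, so global state is not left modified):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(9, 2))

# 9 rows exceeds max_rows (8), so the repr truncates down to min_rows
# and appends the dimensions footer.
with pd.option_context("display.max_rows", 8, "display.min_rows", 4):
    text = repr(df)
    assert ".." in text
    assert "[9 rows x 2 columns]" in text
```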

display.expand_frame_repr allows the representation of a DataFrame to stretch across pages, wrapped over all the columns.

In [38]: df = pd.DataFrame(np.random.randn(5, 10))

In [39]: pd.set_option("expand_frame_repr", True)

In [40]: df
Out[40]: 
          0         1         2  ...         7         8         9
0 -0.006154 -0.923061  0.895717  ...  1.340309 -1.170299 -0.226169
1  0.410835  0.813850  0.132003  ... -1.436737 -1.413681  1.607920
2  1.024180  0.569605  0.875906  ... -0.078638  0.545952 -1.219217
3 -1.226825  0.769804 -1.281247  ...  0.341734  0.959726 -1.110336
4 -0.619976  0.149748 -0.732339  ...  0.301624 -2.179861 -1.369849

[5 rows x 10 columns]

In [41]: pd.set_option("expand_frame_repr", False)

In [42]: df
Out[42]: 
          0         1         2         3         4         5         6         7         8         9
0 -0.006154 -0.923061  0.895717  0.805244 -1.206412  2.565646  1.431256  1.340309 -1.170299 -0.226169
1  0.410835  0.813850  0.132003 -0.827317 -0.076467 -1.187678  1.130127 -1.436737 -1.413681  1.607920
2  1.024180  0.569605  0.875906 -2.211372  0.974466 -2.006747 -0.410001 -0.078638  0.545952 -1.219217
3 -1.226825  0.769804 -1.281247 -0.727707 -0.121306 -0.097883  0.695775  0.341734  0.959726 -1.110336
4 -0.619976  0.149748 -0.732339  0.687738  0.176444  0.403310 -0.154951  0.301624 -2.179861 -1.369849

In [43]: pd.reset_option("expand_frame_repr") 

display.large_repr displays a DataFrame that exceeds max_columns or max_rows as either a truncated frame or a summary.

In [44]: df = pd.DataFrame(np.random.randn(10, 10))

In [45]: pd.set_option("display.max_rows", 5)

In [46]: pd.set_option("large_repr", "truncate")

In [47]: df
Out[47]: 
           0         1         2  ...         7         8         9
0  -0.954208  1.462696 -1.743161  ...  0.995761  2.396780  0.014871
1   3.357427 -0.317441 -1.236269  ...  0.380396  0.084844  0.432390
..       ...       ...       ...  ...       ...       ...       ...
8  -0.303421 -0.858447  0.306996  ...  0.476720  0.473424 -0.242861
9  -0.014805 -0.284319  0.650776  ...  1.613616  0.464000  0.227371

[10 rows x 10 columns]

In [48]: pd.set_option("large_repr", "info")

In [49]: df
Out[49]: 
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   0       10 non-null     float64
 1   1       10 non-null     float64
 2   2       10 non-null     float64
 3   3       10 non-null     float64
 4   4       10 non-null     float64
 5   5       10 non-null     float64
 6   6       10 non-null     float64
 7   7       10 non-null     float64
 8   8       10 non-null     float64
 9   9       10 non-null     float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [50]: pd.reset_option("large_repr")

In [51]: pd.reset_option("display.max_rows") 

display.max_colwidth sets the maximum width of columns. Cells of this length or longer will be truncated with an ellipsis.

In [52]: df = pd.DataFrame(
   ....:    np.array(
   ....:        [
   ....:            ["foo", "bar", "bim", "uncomfortably long string"],
   ....:            ["horse", "cow", "banana", "apple"],
   ....:        ]
   ....:    )
   ....: )
   ....: 

In [53]: pd.set_option("max_colwidth", 40)

In [54]: df
Out[54]: 
       0    1       2                          3
0    foo  bar     bim  uncomfortably long string
1  horse  cow  banana                      apple

In [55]: pd.set_option("max_colwidth", 6)

In [56]: df
Out[56]: 0    1      2      3
0    foo  bar    bim  un...
1  horse  cow  ba...  apple

In [57]: pd.reset_option("max_colwidth") 

`display.max_info_columns` sets a threshold for the number of columns displayed when `info()` is called.

In [58]: df = pd.DataFrame(np.random.randn(10, 10))

In [59]: pd.set_option("max_info_columns", 11)

In [60]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   0       10 non-null     float64
 1   1       10 non-null     float64
 2   2       10 non-null     float64
 3   3       10 non-null     float64
 4   4       10 non-null     float64
 5   5       10 non-null     float64
 6   6       10 non-null     float64
 7   7       10 non-null     float64
 8   8       10 non-null     float64
 9   9       10 non-null     float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [61]: pd.set_option("max_info_columns", 5)

In [62]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 928.0 bytes

In [63]: pd.reset_option("max_info_columns") 

display.max_info_rowsinfo()通常会显示每列的空值计数。对于大型DataFrame,这可能会相当慢。max_info_rowsmax_info_cols 分别限制了此空值检查的行数和列数。info()的关键字参数show_counts=True将覆盖此设置。

In [64]: df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))

In [65]: df
Out[65]: 0    1    2    3    4    5    6    7    8    9
0  0.0  NaN  1.0  NaN  NaN  0.0  NaN  0.0  NaN  1.0
1  1.0  NaN  1.0  1.0  1.0  1.0  NaN  0.0  0.0  NaN
2  0.0  NaN  1.0  0.0  0.0  NaN  NaN  NaN  NaN  0.0
3  NaN  NaN  NaN  0.0  1.0  1.0  NaN  1.0  NaN  1.0
4  0.0  NaN  NaN  NaN  0.0  NaN  NaN  NaN  1.0  0.0
5  0.0  1.0  1.0  1.0  1.0  0.0  NaN  NaN  1.0  0.0
6  1.0  1.0  1.0  NaN  1.0  NaN  1.0  0.0  NaN  NaN
7  0.0  0.0  1.0  0.0  1.0  0.0  1.0  1.0  0.0  NaN
8  NaN  NaN  NaN  0.0  NaN  NaN  NaN  NaN  1.0  NaN
9  0.0  NaN  0.0  NaN  NaN  0.0  NaN  1.0  1.0  0.0

In [66]: pd.set_option("max_info_rows", 11)

In [67]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   0       8 non-null      float64
 1   1       3 non-null      float64
 2   2       7 non-null      float64
 3   3       6 non-null      float64
 4   4       7 non-null      float64
 5   5       6 non-null      float64
 6   6       2 non-null      float64
 7   7       6 non-null      float64
 8   8       6 non-null      float64
 9   9       6 non-null      float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [68]: pd.set_option("max_info_rows", 5)

In [69]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
 #   Column  Dtype  
---  ------  -----  
 0   0       float64
 1   1       float64
 2   2       float64
 3   3       float64
 4   4       float64
 5   5       float64
 6   6       float64
 7   7       float64
 8   8       float64
 9   9       float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [70]: pd.reset_option("max_info_rows") 
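The `show_counts=True` override can be verified programmatically by writing the `info()` report to a buffer. A minimal sketch (10 rows exceed a `max_info_rows` of 5, so the null check is skipped unless forced):

```python
import io

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))

with pd.option_context("display.max_info_rows", 5):
    # Null counts are suppressed: the frame has more rows than max_info_rows...
    buf = io.StringIO()
    df.info(buf=buf)
    assert "Non-Null Count" not in buf.getvalue()

    # ...but show_counts=True forces them regardless of the option.
    buf = io.StringIO()
    df.info(buf=buf, show_counts=True)
    assert "Non-Null Count" in buf.getvalue()
```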

`display.precision` sets the output display precision in terms of decimal places.

In [71]: df = pd.DataFrame(np.random.randn(5, 5))

In [72]: pd.set_option("display.precision", 7)

In [73]: df
Out[73]: 0          1          2          3          4
0 -1.1506406 -0.7983341 -0.5576966  0.3813531  1.3371217
1 -1.5310949  1.3314582 -0.5713290 -0.0266708 -1.0856630
2 -1.1147378 -0.0582158 -0.4867681  1.6851483  0.1125723
3 -1.4953086  0.8984347 -0.1482168 -1.5960698  0.1596530
4  0.2621358  0.0362196  0.1847350 -0.2550694 -0.2710197

In [74]: pd.set_option("display.precision", 4)

In [75]: df
Out[75]: 0       1       2       3       4
0 -1.1506 -0.7983 -0.5577  0.3814  1.3371
1 -1.5311  1.3315 -0.5713 -0.0267 -1.0857
2 -1.1147 -0.0582 -0.4868  1.6851  0.1126
3 -1.4953  0.8984 -0.1482 -1.5961  0.1597
4  0.2621  0.0362  0.1847 -0.2551 -0.2710 

`display.chop_threshold` sets the threshold below which values are displayed as zero when a `Series` or `DataFrame` is shown. This setting does not change the precision at which the numbers are stored.

In [76]: df = pd.DataFrame(np.random.randn(6, 6))

In [77]: pd.set_option("chop_threshold", 0)

In [78]: df
Out[78]: 0       1       2       3       4       5
0  1.2884  0.2946 -1.1658  0.8470 -0.6856  0.6091
1 -0.3040  0.6256 -0.0593  0.2497  1.1039 -1.0875
2  1.9980 -0.2445  0.1362  0.8863 -1.3507 -0.8863
3 -1.0133  1.9209 -0.3882 -2.3144  0.6655  0.4026
4  0.3996 -1.7660  0.8504  0.3881  0.9923  0.7441
5 -0.7398 -1.0549 -0.1796  0.6396  1.5850  1.9067

In [79]: pd.set_option("chop_threshold", 0.5)

In [80]: df
Out[80]: 0       1       2       3       4       5
0  1.2884  0.0000 -1.1658  0.8470 -0.6856  0.6091
1  0.0000  0.6256  0.0000  0.0000  1.1039 -1.0875
2  1.9980  0.0000  0.0000  0.8863 -1.3507 -0.8863
3 -1.0133  1.9209  0.0000 -2.3144  0.6655  0.0000
4  0.0000 -1.7660  0.8504  0.0000  0.9923  0.7441
5 -0.7398 -1.0549  0.0000  0.6396  1.5850  1.9067

In [81]: pd.reset_option("chop_threshold") 
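That the stored values are untouched by `chop_threshold` can be checked directly; a small sketch, assuming the display behavior described above:

```python
import pandas as pd

s = pd.Series([0.25, 1.5])

with pd.option_context("display.chop_threshold", 0.5):
    # 0.25 is below the threshold, so the repr shows it as zero...
    assert "0.25" not in repr(s)
    # ...but the underlying data are unchanged.
    assert s.iloc[0] == 0.25
```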

`display.colheader_justify` controls the justification of the column headers. The options are `'right'` and `'left'`.

In [82]: df = pd.DataFrame(
   ....:    np.array([np.random.randn(6), np.random.randint(1, 9, 6) * 0.1, np.zeros(6)]).T,
   ....:    columns=["A", "B", "C"],
   ....:    dtype="float",
   ....: )
   ....: 

In [83]: pd.set_option("colheader_justify", "right")

In [84]: df
Out[84]: A    B    C
0  0.1040  0.1  0.0
1  0.1741  0.5  0.0
2 -0.4395  0.4  0.0
3 -0.7413  0.8  0.0
4 -0.0797  0.4  0.0
5 -0.9229  0.3  0.0

In [85]: pd.set_option("colheader_justify", "left")

In [86]: df
Out[86]: A       B    C 
0  0.1040  0.1  0.0
1  0.1741  0.5  0.0
2 -0.4395  0.4  0.0
3 -0.7413  0.8  0.0
4 -0.0797  0.4  0.0
5 -0.9229  0.3  0.0

In [87]: pd.reset_option("colheader_justify") 
```

## Number formatting

pandas also lets you control how numbers are displayed in the console. This option is not set through the `set_options` API. Use the `set_eng_float_format` function to alter the floating-point format of pandas objects to produce a particular format.

```py
In [88]: import numpy as np

In [89]: pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)

In [90]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])

In [91]: s / 1.0e3
Out[91]: 
a    303.638u
b   -721.084u
c   -622.696u
d    648.250u
e     -1.945m
dtype: float64

In [92]: s / 1.0e6
Out[92]: 
a    303.638n
b   -721.084n
c   -622.696n
d    648.250n
e     -1.945u
dtype: float64 
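To change the data themselves rather than how they are displayed, `DataFrame.round` can be used; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.23456, 2.34567]})

# Round the stored values to 2 decimals (returns a new frame).
rounded = df.round(2)

assert abs(rounded.loc[0, "a"] - 1.23) < 1e-9
# The original frame is left unchanged.
assert abs(df.loc[0, "a"] - 1.23456) < 1e-12
```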

Use `round()` to specifically control rounding of an individual `DataFrame`.

## Unicode formatting

Warning

Enabling this option affects the printing performance of `DataFrame` and `Series` (roughly 2 times slower). Use it only when actually needed.

Some East Asian countries use Unicode characters whose width is equivalent to two Latin characters. If a `DataFrame` or `Series` contains these characters, the default output mode may not align them correctly.

In [93]: df = pd.DataFrame({"国籍": ["UK", "日本"], "名前": ["Alice", "しのぶ"]})

In [94]: df
Out[94]: 国籍     名前
0  UK  Alice
1  日本    しのぶ 

Enabling `display.unicode.east_asian_width` allows pandas to check the "East Asian Width" property of each character. By setting this option to `True`, these characters are aligned properly. However, this results in longer rendering times than the standard `len` function.

In [95]: pd.set_option("display.unicode.east_asian_width", True)

In [96]: df
Out[96]: 国籍    名前
0    UK   Alice
1  日本  しのぶ 

In addition, Unicode characters whose width is "ambiguous" can be either 1 or 2 characters wide depending on the terminal settings or encoding. The option `display.unicode.ambiguous_as_wide` can be used to handle this ambiguity.

By default, the width of an "ambiguous" character, such as "¡" (inverted exclamation mark) below, is considered to be 1.

In [97]: df = pd.DataFrame({"a": ["xxx", "¡¡"], "b": ["yyy", "¡¡"]})

In [98]: df
Out[98]: a    b
0  xxx  yyy
1   ¡¡   ¡¡ 

Enabling `display.unicode.ambiguous_as_wide` makes pandas interpret the width of these characters as 2. (Note that this option only takes effect when `display.unicode.east_asian_width` is enabled.)

However, if this option is set incorrectly for your terminal, these characters will be misaligned:

In [99]: pd.set_option("display.unicode.ambiguous_as_wide", True)

In [100]: df
Out[100]: a     b
0   xxx   yyy
1  ¡¡  ¡¡ 
```

## Table schema display

`DataFrame` and `Series` can be published as a Table Schema representation. This can be enabled globally with the `display.html.table_schema` option:

```py
In [101]: pd.set_option("display.html.table_schema", True) 

Only `'display.max_rows'` is serialized and published.

Overview

pandas has an options API to configure and customize global behavior related to `DataFrame` display, data behavior, and more.

Options have a full "dotted-style", case-insensitive name (e.g. `display.max_rows`). You can get/set options directly as attributes of the top-level `options` attribute:

In [1]: import pandas as pd

In [2]: pd.options.display.max_rows
Out[2]: 15

In [3]: pd.options.display.max_rows = 999

In [4]: pd.options.display.max_rows
Out[4]: 999 

The API is composed of 5 relevant functions, available directly from the `pandas` namespace:

  • `get_option()` / `set_option()` - get/set the value of a single option.

  • `reset_option()` - reset one or more options to their default value.

  • `describe_option()` - print the descriptions of one or more options.

  • `option_context()` - execute a code block with a set of options that revert to the prior settings after execution.
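One property of `option_context()` worth knowing: the previous settings are restored even if the block raises. A small sketch:

```python
import pandas as pd

pd.set_option("display.max_rows", 60)

try:
    with pd.option_context("display.max_rows", 5):
        assert pd.get_option("display.max_rows") == 5
        raise RuntimeError("something went wrong inside the block")
except RuntimeError:
    pass

# The prior value is restored despite the exception.
assert pd.get_option("display.max_rows") == 60
```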

Note

Developers can check out pandas/core/config_init.py for more information.

All of the functions above accept a regexp pattern (`re.search` style) as an argument, so long as it matches an unambiguous substring:

In [5]: pd.get_option("display.chop_threshold")

In [6]: pd.set_option("display.chop_threshold", 2)

In [7]: pd.get_option("display.chop_threshold")
Out[7]: 2

In [8]: pd.set_option("chop", 4)

In [9]: pd.get_option("display.chop_threshold")
Out[9]: 4 

The following will not work because it matches multiple option names, e.g. `display.max_colwidth`, `display.max_rows`, `display.max_columns`:

In [10]: pd.get_option("max")
---------------------------------------------------------------------------
OptionError  Traceback (most recent call last)
Cell In[10], line 1
----> 1 pd.get_option("max")

File ~/work/pandas/pandas/pandas/_config/config.py:274, in CallableDynamicDoc.__call__(self, *args, **kwds)
    273 def __call__(self, *args, **kwds) -> T:
--> 274     return self.__func__(*args, **kwds)

File ~/work/pandas/pandas/pandas/_config/config.py:146, in _get_option(pat, silent)
    145 def _get_option(pat: str, silent: bool = False) -> Any:
--> 146     key = _get_single_key(pat, silent)
    148     # walk the nested dict
    149     root, k = _get_root(key)

File ~/work/pandas/pandas/pandas/_config/config.py:134, in _get_single_key(pat, silent)
    132     raise OptionError(f"No such keys(s): {repr(pat)}")
    133 if len(keys) > 1:
--> 134     raise OptionError("Pattern matched multiple keys")
    135 key = keys[0]
    137 if not silent:

OptionError: Pattern matched multiple keys 

Warning

Using this shorthand form may cause your code to break if new options with similar names are added in future versions.

Available options

You can get a list of available options and their descriptions with `describe_option()`. When called with no argument, `describe_option()` prints descriptions for all available options.

In [11]: pd.describe_option()
compute.use_bottleneck : boolUse the bottleneck library to accelerate if it is installed,the default is TrueValid values: False,True[default: True] [currently: True]
compute.use_numba : boolUse the numba engine option for select operations if it is installed,the default is FalseValid values: False,True[default: False] [currently: False]
compute.use_numexpr : boolUse the numexpr library to accelerate computation if it is installed,the default is TrueValid values: False,True[default: True] [currently: True]
display.chop_threshold : float or Noneif set to a float value, all float values smaller than the given thresholdwill be displayed as exactly 0 by repr and friends.[default: None] [currently: None]
display.colheader_justify : 'left'/'right'Controls the justification of column headers. used by DataFrameFormatter.[default: right] [currently: right]
display.date_dayfirst : booleanWhen True, prints and parses dates with the day first, eg 20/01/2005[default: False] [currently: False]
display.date_yearfirst : booleanWhen True, prints and parses dates with the year first, eg 2005/01/20[default: False] [currently: False]
display.encoding : str/unicodeDefaults to the detected encoding of the console.Specifies the encoding to be used for strings returned by to_string,these are generally strings meant to be displayed on the console.[default: utf-8] [currently: utf8]
display.expand_frame_repr : booleanWhether to print out the full DataFrame repr for wide DataFrames acrossmultiple lines, `max_columns` is still respected, but the output willwrap-around across multiple "pages" if its width exceeds `display.width`.[default: True] [currently: True]
display.float_format : callableThe callable should accept a floating point number and returna string with the desired format of the number. This is usedin some places like SeriesFormatter.See formats.format.EngFormatter for an example.[default: None] [currently: None]
display.html.border : intA ``border=value`` attribute is inserted in the ``<table>`` tagfor the DataFrame HTML repr.[default: 1] [currently: 1]
display.html.table_schema : booleanWhether to publish a Table Schema representation for frontendsthat support it.(default: False)[default: False] [currently: False]
display.html.use_mathjax : booleanWhen True, Jupyter notebook will process table contents using MathJax,rendering mathematical expressions enclosed by the dollar symbol.(default: True)[default: True] [currently: True]
display.large_repr : 'truncate'/'info'For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) canshow a truncated table, or switch to the view fromdf.info() (the behaviour in earlier versions of pandas).[default: truncate] [currently: truncate]
display.max_categories : intThis sets the maximum number of categories pandas should output whenprinting out a `Categorical` or a Series of dtype "category".[default: 8] [currently: 8]
display.max_columns : intIf max_cols is exceeded, switch to truncate view. Depending on`large_repr`, objects are either centrally truncated or printed asa summary view. 'None' value means unlimited.In case python/IPython is running in a terminal and `large_repr`equals 'truncate' this can be set to 0 or None and pandas will auto-detectthe width of the terminal and print a truncated object which fitsthe screen width. The IPython notebook, IPython qtconsole, or IDLEdo not run in a terminal and hence it is not possible to docorrect auto-detection and defaults to 20.[default: 0] [currently: 0]
display.max_colwidth : int or NoneThe maximum width in characters of a column in the repr ofa pandas data structure. When the column overflows, a "..."placeholder is embedded in the output. A 'None' value means unlimited.[default: 50] [currently: 50]
display.max_dir_items : intThe number of items that will be added to `dir(...)`. 'None' value meansunlimited. Because dir is cached, changing this option will not immediatelyaffect already existing dataframes until a column is deleted or added.This is for instance used to suggest columns from a dataframe to tabcompletion.[default: 100] [currently: 100]
display.max_info_columns : intmax_info_columns is used in DataFrame.info method to decide ifper column information will be printed.[default: 100] [currently: 100]
display.max_info_rows : intdf.info() will usually show null-counts for each column.For large frames this can be quite slow. max_info_rows and max_info_colslimit this null check only to frames with smaller dimensions thanspecified.[default: 1690785] [currently: 1690785]
display.max_rows : intIf max_rows is exceeded, switch to truncate view. Depending on`large_repr`, objects are either centrally truncated or printed asa summary view. 'None' value means unlimited.In case python/IPython is running in a terminal and `large_repr`equals 'truncate' this can be set to 0 and pandas will auto-detectthe height of the terminal and print a truncated object which fitsthe screen height. The IPython notebook, IPython qtconsole, orIDLE do not run in a terminal and hence it is not possible to docorrect auto-detection.[default: 60] [currently: 60]
display.max_seq_items : int or NoneWhen pretty-printing a long sequence, no more then `max_seq_items`will be printed. If items are omitted, they will be denoted by theaddition of "..." to the resulting string.If set to None, the number of items to be printed is unlimited.[default: 100] [currently: 100]
display.memory_usage : bool, string or NoneThis specifies if the memory usage of a DataFrame should be displayed whendf.info() is called. Valid values True,False,'deep'[default: True] [currently: True]
display.min_rows : intThe numbers of rows to show in a truncated view (when `max_rows` isexceeded). Ignored when `max_rows` is set to None or 0\. When set toNone, follows the value of `max_rows`.[default: 10] [currently: 10]
display.multi_sparse : boolean"sparsify" MultiIndex display (don't display repeatedelements in outer levels within groups)[default: True] [currently: True]
display.notebook_repr_html : booleanWhen True, IPython notebook will use html representation forpandas objects (if it is available).[default: True] [currently: True]
display.pprint_nest_depth : intControls the number of nested levels to process when pretty-printing[default: 3] [currently: 3]
display.precision : intFloating point output precision in terms of number of places after thedecimal, for regular formatting as well as scientific notation. Similarto ``precision`` in :meth:`numpy.set_printoptions`.[default: 6] [currently: 6]
display.show_dimensions : boolean or 'truncate'Whether to print out dimensions at the end of DataFrame repr.If 'truncate' is specified, only print out the dimensions if theframe is truncated (e.g. not display all rows and/or columns)[default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide : booleanWhether to use the Unicode East Asian Width to calculate the display textwidth.Enabling this may affect to the performance (default: False)[default: False] [currently: False]
display.unicode.east_asian_width : booleanWhether to use the Unicode East Asian Width to calculate the display textwidth.Enabling this may affect to the performance (default: False)[default: False] [currently: False]
display.width : intWidth of the display in characters. In case python/IPython is running ina terminal this can be set to None and pandas will correctly auto-detectthe width.Note that the IPython notebook, IPython qtconsole, or IDLE do not run in aterminal and hence it is not possible to correctly detect the width.[default: 80] [currently: 80]
future.infer_string Whether to infer sequence of str objects as pyarrow string dtype, which will be the default in pandas 3.0 (at which point this option will be deprecated).[default: False] [currently: False]
future.no_silent_downcasting Whether to opt-in to the future behavior which will *not* silently downcast results from Series and DataFrame `where`, `mask`, and `clip` methods. Silent downcasting will be removed in pandas 3.0 (at which point this option will be deprecated).[default: False] [currently: False]
io.excel.ods.reader : stringThe default Excel reader engine for 'ods' files. Available options:auto, odf, calamine.[default: auto] [currently: auto]
io.excel.ods.writer : stringThe default Excel writer engine for 'ods' files. Available options:auto, odf.[default: auto] [currently: auto]
io.excel.xls.reader : stringThe default Excel reader engine for 'xls' files. Available options:auto, xlrd, calamine.[default: auto] [currently: auto]
io.excel.xlsb.reader : stringThe default Excel reader engine for 'xlsb' files. Available options:auto, pyxlsb, calamine.[default: auto] [currently: auto]
io.excel.xlsm.reader : stringThe default Excel reader engine for 'xlsm' files. Available options:auto, xlrd, openpyxl, calamine.[default: auto] [currently: auto]
io.excel.xlsm.writer : stringThe default Excel writer engine for 'xlsm' files. Available options:auto, openpyxl.[default: auto] [currently: auto]
io.excel.xlsx.reader : stringThe default Excel reader engine for 'xlsx' files. Available options:auto, xlrd, openpyxl, calamine.[default: auto] [currently: auto]
io.excel.xlsx.writer : stringThe default Excel writer engine for 'xlsx' files. Available options:auto, openpyxl, xlsxwriter.[default: auto] [currently: auto]
io.hdf.default_format : formatdefault format writing format, if None, thenput will default to 'fixed' and append will default to 'table'[default: None] [currently: None]
io.hdf.dropna_table : booleandrop ALL nan rows when appending to a table[default: False] [currently: False]
io.parquet.engine : stringThe default parquet reader/writer engine. Available options:'auto', 'pyarrow', 'fastparquet', the default is 'auto'[default: auto] [currently: auto]
io.sql.engine : stringThe default sql reader/writer engine. Available options:'auto', 'sqlalchemy', the default is 'auto'[default: auto] [currently: auto]
mode.chained_assignment : stringRaise an exception, warn, or no action if trying to use chained assignment,The default is warn[default: warn] [currently: warn]
mode.copy_on_write : boolUse new copy-view behaviour using Copy-on-Write. Defaults to False,unless overridden by the 'PANDAS_COPY_ON_WRITE' environment variable(if set to "1" for True, needs to be set before pandas is imported).[default: False] [currently: False]
mode.data_manager : stringInternal data manager type; can be "block" or "array". Defaults to "block",unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needsto be set before pandas is imported).[default: block] [currently: block](Deprecated, use `` instead.)
mode.sim_interactive : booleanWhether to simulate interactive mode for purposes of testing[default: False] [currently: False]
mode.string_storage : stringThe default storage for StringDtype. This option is ignored if``future.infer_string`` is set to True.[default: python] [currently: python]
mode.use_inf_as_na : booleanTrue means treat None, NaN, INF, -INF as NA (old way),False means None and NaN are null, but INF, -INF are not NA(new way).This option is deprecated in pandas 2.1.0 and will be removed in 3.0.[default: False] [currently: False](Deprecated, use `` instead.)
plotting.backend : strThe plotting backend to use. The default value is "matplotlib", thebackend provided with pandas. Other backends can be specified byproviding the name of the module that implements the backend.[default: matplotlib] [currently: matplotlib]
plotting.matplotlib.register_converters : bool or 'auto'.Whether to register converters with matplotlib's units registry fordates, times, datetimes, and Periods. Toggling to False will removethe converters, restoring any converters that pandas overwrote.[default: auto] [currently: auto]
styler.format.decimal : strThe character representation for the decimal separator for floats and complex.[default: .] [currently: .]
styler.format.escape : str, optionalWhether to escape certain characters according to the given context; html or latex.[default: None] [currently: None]
styler.format.formatter : str, callable, dict, optionalA formatter object to be used as default within ``Styler.format``.[default: None] [currently: None]
styler.format.na_rep : str, optionalThe string representation for values identified as missing.[default: None] [currently: None]
styler.format.precision : intThe precision for floats and complex numbers.[default: 6] [currently: 6]
styler.format.thousands : str, optionalThe character representation for thousands separator for floats, int and complex.[default: None] [currently: None]
styler.html.mathjax : boolIf False will render special CSS classes to table attributes that indicate Mathjaxwill not be used in Jupyter Notebook.[default: True] [currently: True]
styler.latex.environment : strThe environment to replace ``\begin{table}``. If "longtable" is used resultsin a specific longtable environment format.[default: None] [currently: None]
styler.latex.hrules : boolWhether to add horizontal rules on top and bottom and below the headers.[default: False] [currently: False]
styler.latex.multicol_align : {"r", "c", "l", "naive-l", "naive-r"}The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipedecorators can also be added to non-naive values to draw verticalrules, e.g. "\|r" will draw a rule on the left side of right aligned merged cells.[default: r] [currently: r]
styler.latex.multirow_align : {"c", "t", "b"}The specifier for vertical alignment of sparsified LaTeX multirows.[default: c] [currently: c]
styler.render.encoding : strThe encoding used for output HTML and LaTeX files.[default: utf-8] [currently: utf-8]
styler.render.max_columns : int, optionalThe maximum number of columns that will be rendered. May still be reduced tosatisfy ``max_elements``, which takes precedence.[default: None] [currently: None]
styler.render.max_elements : intThe maximum number of data-cell (<td>) elements that will be rendered beforetrimming will occur over columns, rows or both if needed.[default: 262144] [currently: 262144]
styler.render.max_rows : int, optionalThe maximum number of rows that will be rendered. May still be reduced tosatisfy ``max_elements``, which takes precedence.[default: None] [currently: None]
styler.render.repr : strDetermine which output to use in Jupyter Notebook in {"html", "latex"}.[default: html] [currently: html]
styler.sparse.columns : boolWhether to sparsify the display of hierarchical columns. Setting to False willdisplay each explicit level element in a hierarchical key for each column.[default: True] [currently: True]
styler.sparse.index : boolWhether to sparsify the display of a hierarchical index. Setting to False willdisplay each explicit level element in a hierarchical key for each row.[default: True] [currently: True] 

Getting and setting options

As described above, `get_option()` and `set_option()` are available from the pandas namespace. To change an option, call `set_option('option regex', new_value)`.

In [12]: pd.get_option("mode.sim_interactive")
Out[12]: False

In [13]: pd.set_option("mode.sim_interactive", True)

In [14]: pd.get_option("mode.sim_interactive")
Out[14]: True 

Note

The option `'mode.sim_interactive'` is mostly used for debugging purposes.

You can use `reset_option()` to revert a setting to its default value.

In [15]: pd.get_option("display.max_rows")
Out[15]: 60

In [16]: pd.set_option("display.max_rows", 999)

In [17]: pd.get_option("display.max_rows")
Out[17]: 999

In [18]: pd.reset_option("display.max_rows")

In [19]: pd.get_option("display.max_rows")
Out[19]: 60 

It is also possible to reset multiple options at once (using a regex):

In [20]: pd.reset_option("^display") 
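A quick sketch verifying that a regex reset reverts every matching option (the default values of 60 and 6 are the ones stated in the option descriptions above):

```python
import pandas as pd

pd.set_option("display.max_rows", 999)
pd.set_option("display.precision", 3)

# One call resets every option whose name matches the pattern.
pd.reset_option("^display")

assert pd.get_option("display.max_rows") == 60   # back to its default
assert pd.get_option("display.precision") == 6   # back to its default
```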

The `option_context()` context manager is exposed through the top-level API, allowing you to execute code with given option values. Option values are restored automatically when you exit the `with` block:

In [21]: with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
   ....:    print(pd.get_option("display.max_rows"))
   ....:    print(pd.get_option("display.max_columns"))
   ....: 
10
5

In [22]: print(pd.get_option("display.max_rows"))
60

In [23]: print(pd.get_option("display.max_columns"))
0 

Setting startup options in Python/IPython environment

Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas more efficient. To do this, create a `.py` or `.ipy` script in the startup directory of the desired profile. An example where the startup folder is in a default IPython profile can be found at:

$IPYTHONDIR/profile_default/startup 

More information can be found in the IPython documentation. An example startup script for pandas is displayed below:

import pandas as pd

pd.set_option("display.max_rows", 999)
pd.set_option("display.precision", 5) 

Frequently used options

The following is a demonstration of the more frequently used display options.

display.max_rowsdisplay.max_columns 设置在美观打印框架时显示的最大行数和列数。截断的行将被省略号替换。

In [24]: df = pd.DataFrame(np.random.randn(7, 2))

In [25]: pd.set_option("display.max_rows", 7)

In [26]: df
Out[26]: 0         1
0  0.469112 -0.282863
1 -1.509059 -1.135632
2  1.212112 -0.173215
3  0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929  1.071804
6  0.721555 -0.706771

In [27]: pd.set_option("display.max_rows", 5)

In [28]: df
Out[28]: 0         1
0   0.469112 -0.282863
1  -1.509059 -1.135632
..       ...       ...
5  -0.494929  1.071804
6   0.721555 -0.706771

[7 rows x 2 columns]

In [29]: pd.reset_option("display.max_rows") 

Once `display.max_rows` is exceeded, the `display.min_rows` option determines how many rows are shown in the truncated repr.

In [30]: pd.set_option("display.max_rows", 8)

In [31]: pd.set_option("display.min_rows", 4)

# below max_rows -> all rows shown
In [32]: df = pd.DataFrame(np.random.randn(7, 2))

In [33]: df
Out[33]: 0         1
0 -1.039575  0.271860
1 -0.424972  0.567020
2  0.276232 -1.087401
3 -0.673690  0.113648
4 -1.478427  0.524988
5  0.404705  0.577046
6 -1.715002 -1.039268

# above max_rows -> only min_rows (4) rows shown
In [34]: df = pd.DataFrame(np.random.randn(9, 2))

In [35]: df
Out[35]: 0         1
0  -0.370647 -1.157892
1  -1.344312  0.844885
..       ...       ...
7   0.276662 -0.472035
8  -0.013960 -0.362543

[9 rows x 2 columns]

In [36]: pd.reset_option("display.max_rows")

In [37]: pd.reset_option("display.min_rows") 

`display.expand_frame_repr` allows the repr of a `DataFrame` to span pages, wrapping over all of the columns.

In [38]: df = pd.DataFrame(np.random.randn(5, 10))

In [39]: pd.set_option("expand_frame_repr", True)

In [40]: df
Out[40]: 0         1         2  ...         7         8         9
0 -0.006154 -0.923061  0.895717  ...  1.340309 -1.170299 -0.226169
1  0.410835  0.813850  0.132003  ... -1.436737 -1.413681  1.607920
2  1.024180  0.569605  0.875906  ... -0.078638  0.545952 -1.219217
3 -1.226825  0.769804 -1.281247  ...  0.341734  0.959726 -1.110336
4 -0.619976  0.149748 -0.732339  ...  0.301624 -2.179861 -1.369849

[5 rows x 10 columns]

In [41]: pd.set_option("expand_frame_repr", False)

In [42]: df
Out[42]: 0         1         2         3         4         5         6         7         8         9
0 -0.006154 -0.923061  0.895717  0.805244 -1.206412  2.565646  1.431256  1.340309 -1.170299 -0.226169
1  0.410835  0.813850  0.132003 -0.827317 -0.076467 -1.187678  1.130127 -1.436737 -1.413681  1.607920
2  1.024180  0.569605  0.875906 -2.211372  0.974466 -2.006747 -0.410001 -0.078638  0.545952 -1.219217
3 -1.226825  0.769804 -1.281247 -0.727707 -0.121306 -0.097883  0.695775  0.341734  0.959726 -1.110336
4 -0.619976  0.149748 -0.732339  0.687738  0.176444  0.403310 -0.154951  0.301624 -2.179861 -1.369849

In [43]: pd.reset_option("expand_frame_repr") 

display.large_repr 显示超过 max_columnsmax_rowsDataFrame 为截断的框架或摘要。

In [44]: df = pd.DataFrame(np.random.randn(10, 10))In [45]: pd.set_option("display.max_rows", 5)In [46]: pd.set_option("large_repr", "truncate")In [47]: df
Out[47]: 0         1         2  ...         7         8         9
0  -0.954208  1.462696 -1.743161  ...  0.995761  2.396780  0.014871
1   3.357427 -0.317441 -1.236269  ...  0.380396  0.084844  0.432390
..       ...       ...       ...  ...       ...       ...       ...
8  -0.303421 -0.858447  0.306996  ...  0.476720  0.473424 -0.242861
9  -0.014805 -0.284319  0.650776  ...  1.613616  0.464000  0.227371[10 rows x 10 columns]In [48]: pd.set_option("large_repr", "info")In [49]: df
Out[49]: 
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):#   Column  Non-Null Count  Dtype 
---  ------  --------------  ----- 0   0       10 non-null     float641   1       10 non-null     float642   2       10 non-null     float643   3       10 non-null     float644   4       10 non-null     float645   5       10 non-null     float646   6       10 non-null     float647   7       10 non-null     float648   8       10 non-null     float649   9       10 non-null     float64
dtypes: float64(10)
memory usage: 928.0 bytesIn [50]: pd.reset_option("large_repr")In [51]: pd.reset_option("display.max_rows") 

display.max_colwidth 设置列的最大宽度。超过此长度的单元格将以省略号截断。

In [52]: df = pd.DataFrame(....:    np.array(....:        [....:            ["foo", "bar", "bim", "uncomfortably long string"],....:            ["horse", "cow", "banana", "apple"],....:        ]....:    )....: )....: In [53]: pd.set_option("max_colwidth", 40)In [54]: df
Out[54]: 0    1       2                          3
0    foo  bar     bim  uncomfortably long string
1  horse  cow  banana                      appleIn [55]: pd.set_option("max_colwidth", 6)In [56]: df
Out[56]: 0    1      2      3
0    foo  bar    bim  un...
1  horse  cow  ba...  appleIn [57]: pd.reset_option("max_colwidth") 

display.max_info_columns 设置在调用 info() 时显示的列数阈值。

In [58]: df = pd.DataFrame(np.random.randn(10, 10))In [59]: pd.set_option("max_info_columns", 11)In [60]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):#   Column  Non-Null Count  Dtype 
---  ------  --------------  ----- 0   0       10 non-null     float641   1       10 non-null     float642   2       10 non-null     float643   3       10 non-null     float644   4       10 non-null     float645   5       10 non-null     float646   6       10 non-null     float647   7       10 non-null     float648   8       10 non-null     float649   9       10 non-null     float64
dtypes: float64(10)
memory usage: 928.0 bytesIn [61]: pd.set_option("max_info_columns", 5)In [62]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 928.0 bytesIn [63]: pd.reset_option("max_info_columns") 

display.max_info_rowsinfo() 通常会显示每列的空值计数。对于大型 DataFrame 来说,这可能会相当慢。max_info_rowsmax_info_cols 将此空值检查限制为分别指定的行和列。info() 的关键字参数 show_counts=True 将覆盖此设置。

In [64]: df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))

In [65]: df
Out[65]: 
     0    1    2    3    4    5    6    7    8    9
0  0.0  NaN  1.0  NaN  NaN  0.0  NaN  0.0  NaN  1.0
1  1.0  NaN  1.0  1.0  1.0  1.0  NaN  0.0  0.0  NaN
2  0.0  NaN  1.0  0.0  0.0  NaN  NaN  NaN  NaN  0.0
3  NaN  NaN  NaN  0.0  1.0  1.0  NaN  1.0  NaN  1.0
4  0.0  NaN  NaN  NaN  0.0  NaN  NaN  NaN  1.0  0.0
5  0.0  1.0  1.0  1.0  1.0  0.0  NaN  NaN  1.0  0.0
6  1.0  1.0  1.0  NaN  1.0  NaN  1.0  0.0  NaN  NaN
7  0.0  0.0  1.0  0.0  1.0  0.0  1.0  1.0  0.0  NaN
8  NaN  NaN  NaN  0.0  NaN  NaN  NaN  NaN  1.0  NaN
9  0.0  NaN  0.0  NaN  NaN  0.0  NaN  1.0  1.0  0.0

In [66]: pd.set_option("max_info_rows", 11)

In [67]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   0       8 non-null      float64
 1   1       3 non-null      float64
 2   2       7 non-null      float64
 3   3       6 non-null      float64
 4   4       7 non-null      float64
 5   5       6 non-null      float64
 6   6       2 non-null      float64
 7   7       6 non-null      float64
 8   8       6 non-null      float64
 9   9       6 non-null      float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [68]: pd.set_option("max_info_rows", 5)

In [69]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
 #   Column  Dtype  
---  ------  -----  
 0   0       float64
 1   1       float64
 2   2       float64
 3   3       float64
 4   4       float64
 5   5       float64
 6   6       float64
 7   7       float64
 8   8       float64
 9   9       float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [70]: pd.reset_option("max_info_rows")
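As a complement, here is a minimal sketch (using an arbitrary random frame) showing that show_counts=True forces the non-null counts even when max_info_rows is below the frame's length:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))

# max_info_rows is below the 10-row length, so info() alone would skip
# the null check, but show_counts=True overrides the option.
with pd.option_context("display.max_info_rows", 5):
    df.info(show_counts=True)
```

Using option_context instead of set_option keeps the change local to the with block.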

display.precision sets the output display precision in terms of decimal places.

In [71]: df = pd.DataFrame(np.random.randn(5, 5))

In [72]: pd.set_option("display.precision", 7)

In [73]: df
Out[73]: 
           0          1          2          3          4
0 -1.1506406 -0.7983341 -0.5576966  0.3813531  1.3371217
1 -1.5310949  1.3314582 -0.5713290 -0.0266708 -1.0856630
2 -1.1147378 -0.0582158 -0.4867681  1.6851483  0.1125723
3 -1.4953086  0.8984347 -0.1482168 -1.5960698  0.1596530
4  0.2621358  0.0362196  0.1847350 -0.2550694 -0.2710197

In [74]: pd.set_option("display.precision", 4)

In [75]: df
Out[75]: 
        0       1       2       3       4
0 -1.1506 -0.7983 -0.5577  0.3814  1.3371
1 -1.5311  1.3315 -0.5713 -0.0267 -1.0857
2 -1.1147 -0.0582 -0.4868  1.6851  0.1126
3 -1.4953  0.8984 -0.1482 -1.5961  0.1597
4  0.2621  0.0362  0.1847 -0.2551 -0.2710

display.chop_threshold sets the rounding threshold below which values are displayed as zero when showing a Series or DataFrame. This setting does not change the precision at which the numbers are stored.

In [76]: df = pd.DataFrame(np.random.randn(6, 6))

In [77]: pd.set_option("chop_threshold", 0)

In [78]: df
Out[78]: 
        0       1       2       3       4       5
0  1.2884  0.2946 -1.1658  0.8470 -0.6856  0.6091
1 -0.3040  0.6256 -0.0593  0.2497  1.1039 -1.0875
2  1.9980 -0.2445  0.1362  0.8863 -1.3507 -0.8863
3 -1.0133  1.9209 -0.3882 -2.3144  0.6655  0.4026
4  0.3996 -1.7660  0.8504  0.3881  0.9923  0.7441
5 -0.7398 -1.0549 -0.1796  0.6396  1.5850  1.9067

In [79]: pd.set_option("chop_threshold", 0.5)

In [80]: df
Out[80]: 
        0       1       2       3       4       5
0  1.2884  0.0000 -1.1658  0.8470 -0.6856  0.6091
1  0.0000  0.6256  0.0000  0.0000  1.1039 -1.0875
2  1.9980  0.0000  0.0000  0.8863 -1.3507 -0.8863
3 -1.0133  1.9209  0.0000 -2.3144  0.6655  0.0000
4  0.0000 -1.7660  0.8504  0.0000  0.9923  0.7441
5 -0.7398 -1.0549  0.0000  0.6396  1.5850  1.9067

In [81]: pd.reset_option("chop_threshold")
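To underline that the threshold is display-only, here is a small sketch (with fixed values rather than random data) checking that the stored numbers are untouched:

```python
import pandas as pd

df = pd.DataFrame({"a": [0.1, 2.0]})

with pd.option_context("display.chop_threshold", 0.5):
    rendered = repr(df)

# 0.1 is below the threshold, so it is rendered as zero...
assert "0.1" not in rendered
# ...but the stored value is unchanged.
assert df.loc[0, "a"] == 0.1
```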

display.colheader_justify controls the justification of the column headers. The options are 'right' and 'left'.

In [82]: df = pd.DataFrame(
   ....:     np.array([np.random.randn(6), np.random.randint(1, 9, 6) * 0.1, np.zeros(6)]).T,
   ....:     columns=["A", "B", "C"],
   ....:     dtype="float",
   ....: )
   ....: 

In [83]: pd.set_option("colheader_justify", "right")

In [84]: df
Out[84]: 
        A    B    C
0  0.1040  0.1  0.0
1  0.1741  0.5  0.0
2 -0.4395  0.4  0.0
3 -0.7413  0.8  0.0
4 -0.0797  0.4  0.0
5 -0.9229  0.3  0.0

In [85]: pd.set_option("colheader_justify", "left")

In [86]: df
Out[86]: 
   A       B    C  
0  0.1040  0.1  0.0
1  0.1741  0.5  0.0
2 -0.4395  0.4  0.0
3 -0.7413  0.8  0.0
4 -0.0797  0.4  0.0
5 -0.9229  0.3  0.0

In [87]: pd.reset_option("colheader_justify")

Number formatting

pandas also allows you to set how numbers are displayed in the console. This option is not set through the set_options API.

Use the set_eng_float_format function to alter the floating-point format of pandas objects to produce a particular format.

In [88]: import numpy as np

In [89]: pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)

In [90]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])

In [91]: s / 1.0e3
Out[91]: 
a    303.638u
b   -721.084u
c   -622.696u
d    648.250u
e     -1.945m
dtype: float64

In [92]: s / 1.0e6
Out[92]: 
a    303.638n
b   -721.084n
c   -622.696n
d    648.250n
e     -1.945u
dtype: float64
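To return to the default float rendering afterwards, note that set_eng_float_format works by installing a formatter through the display.float_format option, so resetting that option undoes it. A brief sketch:

```python
import pandas as pd

pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)
s = pd.Series([0.000303638])
print(s)  # values now carry SI prefixes, e.g. 303.638u

# set_eng_float_format installs a formatter via display.float_format;
# resetting that option restores the default rendering.
pd.reset_option("display.float_format")
print(s)
```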

Use round() to specifically control the rounding of an individual DataFrame.
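For instance, a quick sketch (values are arbitrary) contrasting round() with the display-only precision option:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.23456, 2.34567], "b": [3.45678, 4.56789]})

# round() returns a new frame with the stored values rounded,
# unlike display.precision, which only changes how values render.
rounded = df.round(2)
print(rounded)

# Passing a dict rounds each column to its own number of decimals.
print(df.round({"a": 1, "b": 3}))
```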

Unicode formatting

Warning

Enabling this option will affect the printing performance of DataFrame and Series (about 2 times slower). Use it only when actually required.

Some East Asian countries use Unicode characters whose width is equivalent to two Latin characters. If a DataFrame or Series contains these characters, the default output mode may not align them correctly.

In [93]: df = pd.DataFrame({"国籍": ["UK", "日本"], "名前": ["Alice", "しのぶ"]})

In [94]: df
Out[94]: 
   国籍     名前
0  UK  Alice
1  日本    しのぶ

Enabling display.unicode.east_asian_width allows pandas to check each character's "East Asian Width" property. Setting this option to True aligns these characters correctly. However, this results in longer rendering times than with the standard len function.

In [95]: pd.set_option("display.unicode.east_asian_width", True)

In [96]: df
Out[96]: 
   国籍    名前
0    UK   Alice
1  日本  しのぶ
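Since the printing slowdown noted in the warning applies globally, one pattern (a sketch, not from the original text) is to enable the width check only around a specific render with option_context:

```python
import pandas as pd

df = pd.DataFrame({"国籍": ["UK", "日本"], "名前": ["Alice", "しのぶ"]})

# Enable the East Asian width check for this one print only;
# outside the block, the option reverts to its previous value.
with pd.option_context("display.unicode.east_asian_width", True):
    print(df)
```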

In addition, Unicode characters whose width is "ambiguous" can be either 1 or 2 characters wide depending on the terminal settings or encoding. The option display.unicode.ambiguous_as_wide can be used to handle this ambiguity.

By default, the width of an "ambiguous" character, such as "¡" (inverted exclamation mark) in the example below, is treated as 1.

In [97]: df = pd.DataFrame({"a": ["xxx", "¡¡"], "b": ["yyy", "¡¡"]})

In [98]: df
Out[98]: 
     a    b
0  xxx  yyy
1   ¡¡   ¡¡

Enabling display.unicode.ambiguous_as_wide makes pandas interpret these characters' widths as 2. (Note that this option only takes effect when display.unicode.east_asian_width is enabled.)

However, setting this option incorrectly for your terminal will cause these characters to be misaligned:

In [99]: pd.set_option("display.unicode.ambiguous_as_wide", True)

In [100]: df
Out[100]: 
      a     b
0   xxx   yyy
1  ¡¡  ¡¡

Table schema display

DataFrame and Series will publish a table schema representation by default. This can be enabled globally with the display.html.table_schema option:

In [101]: pd.set_option("display.html.table_schema", True) 

Only 'display.max_rows' rows are serialized and published.

