modin.pandas.Series.str.translate

- Series.str.translate(table) [source] (https://github.com/snowflakedb/snowpark-python/blob/v1.26.0/snowpark-python/.tox/docs/lib/python3.9/site-packages/modin/pandas/series_utils.py#L523-L524)
Map all characters in the string through the given mapping table.
Equivalent to standard str.translate().

- Parameters:
  table (dict) – Table is a mapping of Unicode ordinals to Unicode ordinals, strings, or None. Unmapped characters are left untouched. Characters mapped to None are deleted. str.maketrans() is a helper function for making translation tables.

- Return type:
  Series
Examples
>>> ser = pd.Series(["El niño", "Françoise"])
>>> mytable = str.maketrans({'ñ': 'n', 'ç': 'c'})
>>> ser.str.translate(mytable)
0      El nino
1    Francoise
dtype: object
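The table can also remove characters, since characters mapped to None are deleted. A minimal sketch of that documented behavior (the input Series here is chosen only for illustration):

>>> ser = pd.Series(["El niño"])
>>> ser.str.translate(str.maketrans({'ñ': None}))
0    El nio
dtype: object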
Notes
Snowpark pandas internally uses the Snowflake SQL TRANSLATE function to implement this operation. Because that SQL function operates on strings rather than Unicode codepoints, it accepts mappings with string keys that vanilla pandas would silently ignore.
The following example fails silently in vanilla pandas without str.maketrans:
>>> import pandas
>>> pandas.Series("aaa").str.translate({"a": "A"})
0    aaa
dtype: object
>>> pandas.Series("aaa").str.translate(str.maketrans({"a": "A"}))
0    AAA
dtype: object
The same code works in Snowpark pandas without str.maketrans:
>>> pd.Series("aaa").str.translate({"a": "A"})
0    AAA
dtype: object
>>> pd.Series("aaa").str.translate(str.maketrans({"a": "A"}))
0    AAA
dtype: object
Furthermore, due to restrictions in the underlying SQL, Snowpark pandas currently requires every string value in the mapping to be exactly one Unicode codepoint long. To replace a character with multiple characters, chain calls to Series.str.replace as needed, as shown below.
Vanilla pandas code:
>>> import pandas
>>> pandas.Series("ab").str.translate(str.maketrans({"a": "A", "b": "BBB"}))
0    ABBB
dtype: object
Snowpark pandas equivalent:
>>> pd.Series("ab").str.translate({"a": "A"}).str.replace("b", "BBB")
0    ABBB
dtype: object
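When several multi-character replacements are needed, the same pattern extends by chaining additional Series.str.replace calls. A sketch along those lines (assuming literal, non-regex replacements as in the example above; the input "abc" is illustrative only):

>>> pd.Series("abc").str.translate({"c": "C"}).str.replace("a", "AA").str.replace("b", "BBB")
0    AABBBC
dtype: object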